For saGDH, I set up a Run Summary webpage that gives an overview of the information we have for each run and of the various scripts I used to collect it: http://www.jlab.org/~singhj/runsummary

I have a number of Perl scripts that extract EPICS information from the start-of-run and end-of-run summaries and from the raw data files, and summarize it into a MySQL database. Probably the two most useful scripts are:
(1) http://www.jlab.org/~singhj/runsummary/get_possible_epics.perl
(2) http://www.jlab.org/~singhj/runsummary/get_all_epics.perl
The second script pulls all of the EPICS information directly out of a raw data file.

Finally, I wrote a HALOG Archive search tool using the built-in text search and pattern-matching capabilities of MySQL. For the HALOG entries made during saGDH, I wrote a fancier version:
http://www.jlab.org/~singhj/runsummary/search.php
For all HALOG entries from 1998 through the end of 2005, I simplified it (for various reasons) to do only faster keyword searches:
http://www.jlab.org/~singhj/runsummary/search_all_halog.php
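
For reference, the keyword search boils down to a MySQL full-text query; below is a minimal Perl/DBI sketch of that idea. The database name, table name, column names, and credentials here are placeholders rather than the actual schema (the real tools are the PHP pages linked above), and it assumes a FULLTEXT index exists on the text columns so that MySQL's MATCH ... AGAINST natural-language search can be used.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder DSN and credentials -- not the actual database settings.
    my $dbh = DBI->connect("DBI:mysql:database=runsummary;host=localhost",
                           "user", "password", { RaiseError => 1 });

    # Keyword search against a hypothetical `halog` table that has a
    # FULLTEXT index on (title, body).  MATCH ... AGAINST is MySQL's
    # built-in natural-language text search.
    my $keyword = $ARGV[0] or die "usage: $0 keyword\n";
    my $sth = $dbh->prepare(
        "SELECT entry_id, entry_time, title
           FROM halog
          WHERE MATCH(title, body) AGAINST (?)
          ORDER BY entry_time DESC
          LIMIT 20");
    $sth->execute($keyword);

    # Print the most recent matching entries.
    while (my ($id, $time, $title) = $sth->fetchrow_array) {
        print "$time  [$id]  $title\n";
    }

    $sth->finish;
    $dbh->disconnect;

The fancier saGDH version layers extra pattern matching (e.g. LIKE/REGEXP filters) on top of this, which is why the all-HALOG version was trimmed down to plain keyword searches for speed.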