LabAutomation 2006 - SLAS
<strong>LabAutomation 2006</strong><br />
4:30 pm Tuesday, January 24, <strong>2006</strong> Track 4: Informatics Room: Madera<br />
Wyndham Palm Springs Hotel<br />
Ton van Daelen<br />
SciTegic<br />
San Diego, California<br />
tvd@scitegic.com<br />
Co-Author(s): Robert D. Brown, Mathew Hahn<br />
An Enterprise Platform for Data and Application Integration<br />
Today’s laboratories generate vast amounts of disparate data that must be captured and organized before it can be successfully<br />
exploited, while the software industry produces an equally large number of disparate applications to manage and mine that data.<br />
Data pipelining provides a new paradigm for integrating both the data and the various applications that act on it: it offers<br />
a mechanism to federate data that can be easily modified as new or changed data sources become available. The federated data<br />
can be manipulated on the fly or uploaded into a data warehouse (with pipelining providing the ETL capability). The method inherently<br />
captures best-practice workflows, making data and application integration solutions easy to maintain, share, and document. This paper<br />
will discuss strategies for applying data pipelining to data and application integration projects. Data pipelining also enables workflows<br />
that make novel joins between data from different disciplines. We will show examples that generate knowledge by<br />
joining data flows from genomic and small-molecule sources.<br />
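The pipelining idea described above can be sketched with plain Python generators: each component consumes a stream of records and emits a transformed stream, and a join component merges flows from two sources. The component names (`read_source`, `join_on`) and the record fields are illustrative assumptions, not SciTegic's actual API.

```python
# A minimal data-pipelining sketch using Python generators. All names and
# sample records below are hypothetical, for illustration only.

def read_source(records):
    """Source component: stream records one at a time."""
    for record in records:
        yield record

def join_on(key, left, right_index):
    """Join component: merge each left record with a matching right record."""
    for record in left:
        match = right_index.get(record[key])
        if match is not None:
            yield {**record, **match}

# Toy genomic and small-molecule data flows joined on a shared target id.
genes = [
    {"gene": "EGFR", "target_id": "T1"},
    {"gene": "KRAS", "target_id": "T2"},
]
compounds = {"T1": {"compound": "CHEM-42", "ic50_nM": 12.0}}

joined = list(join_on("target_id", read_source(genes), compounds))
```

Because each stage is a lazy stream, swapping in a new or changed data source only means replacing one component; the downstream pipeline is untouched, which is the maintainability property the abstract highlights.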
9:00 am Wednesday, January 25, <strong>2006</strong> Track 4: Informatics Room: Madera<br />
Wyndham Palm Springs Hotel<br />
Ajit Jadhav<br />
NIH Chemical Genomics Center<br />
Rockville, Maryland<br />
ajadhav@mail.nih.gov<br />
Co-Author: Yuhong Wang<br />
Research Informatics in Probe Discovery at the NIH Chemical Genomics Center<br />
Advances in automation, liquid handling and data analysis have led to high-capacity integrated technologies enabling the assay and<br />
analysis of greater than one million biological reactions per day. Perhaps more important than output is the potential of these technologies<br />
to improve data quality generated from high-throughput screening campaigns. Currently, the industry standard for the primary screen is to<br />
test compounds at a single concentration. However, for the purposes of creating a chemical genomic map of the cross-section between<br />
chemical space and biological activity, a more thorough analysis of each compound is required. We developed a strategy to generate<br />
concentration-response curves on large compound collections using existing technologies. These high-throughput IC50s/EC50s are<br />
being used to drive chemistry for the development of probes that can be utilized as modulators to study biological systems. An effective<br />
informatics framework that supports these activities is critical to the success of our operations. The development of an integrated platform<br />
that combines commercial and in-house solutions to address the management, analysis and visualization of data that spans biology,<br />
chemistry, and qHTS operations will be described.<br />
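The core computation behind a qHTS campaign is fitting a concentration-response (Hill) curve per compound to obtain an IC50. A minimal sketch follows, assuming a fixed-slope four-parameter logistic model and a brute-force grid search over candidate IC50 values; this is an illustration of the curve-fitting step, not the NCGC's actual analysis software.

```python
# Hypothetical sketch of IC50 estimation from a dilution series.

def hill(conc, ic50, top=100.0, bottom=0.0, slope=1.0):
    """Four-parameter logistic (Hill) response; top/bottom/slope fixed here."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** slope)

def fit_ic50(concs, responses):
    """Brute-force fit: pick the IC50 on a log-spaced grid minimizing SSE."""
    grid = [10 ** (e / 10.0) for e in range(-30, 31)]  # 1e-3 .. 1e3
    def sse(ic50):
        return sum((hill(c, ic50) - r) ** 2 for c, r in zip(concs, responses))
    return min(grid, key=sse)

# Synthetic 8-point titration with a true IC50 of 1.0 (arbitrary units).
concs = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
responses = [hill(c, 1.0) for c in concs]
estimate = fit_ic50(concs, responses)
```

In practice a production pipeline would fit all four parameters with a nonlinear least-squares routine and flag poorly behaved curves, but the data shape (one dilution series per compound, one fitted potency out) is the same.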