BLOG

Historic Post (BIOVIA)
Offering insight from the perspective of a Pipeline Pilot and Materials Studio user, Accelrys is pleased to host a post written by guest blogger Dr. Misbah Sarwar, Research Scientist at Johnson Matthey. Dr. Sarwar recently completed a collaboration project focused on fuel cell catalyst discovery and will share her results in an upcoming webinar. This post provides a sneak peek into her findings...

"In recent years there has been a lot of interest in fuel cells as a 'green' power source of the future, particularly for use in cars, which could revolutionize the way we travel. A Proton Exchange Membrane (PEM) fuel cell uses hydrogen as a fuel source and oxygen (from air), which react to produce water and electricity. However, we are still some time away from driving fuel cell cars, as many issues need to be overcome before this technology becomes commercially viable. These include improving the stability and reactivity of the catalysts as well as lowering their cost, which can potentially be achieved by alloying; identifying the correct combinations and ratios of metals is key. This is a huge task, as there are potentially thousands of different combinations, and one where modeling can play a crucial role.

As part of the iCatDesign project, a three-year collaboration with Accelrys and CMR Fuel Cells funded by the UK Technology Strategy Board, we screened hundreds of metal combinations using plane-wave CASTEP calculations. In terms of stability, understanding the surface composition in the fuel cell environment is key. Predicting activity usually involves calculating barriers to each of the steps in the reaction, which is extremely time consuming and not well suited to a screening approach. Could we avoid these calculations and predict the activity of the catalyst based on adsorption energies or some fundamental surface property? Of course these predictions would have to be validated, and alongside the modeling work an experimental team at Johnson Matthey synthesized, characterized and tested the catalysts for stability and activity.

The prospect of setting up hundreds of calculations, monitoring them and then analyzing the results seemed quite daunting, and it was clear that some automation was required both to set up the calculations and to process the results quickly. Using Pipeline Pilot technology (now part of the Materials Studio Collection), we developed protocols that processed the calculations, along with statistical analysis tools to establish correlations between material composition, stability and reactivity. The results are available to all partners through a customized web interface. The protocols have been invaluable: data can be processed at the click of a button and customized charts produced in seconds. The time saving is immense, sparing us days of copying, pasting and manipulating data in spreadsheets, not to mention minimizing human error, and leaving us to do the more interesting task of thinking about the science behind the results. I look forward to sharing these results and describing the tools used to obtain them in more detail in the webinar, Fuel Cell Catalyst Discovery with the Materials Studio Collection, on 21st July."
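For readers curious what this kind of automated post-processing can look like outside of Pipeline Pilot, here is a minimal Python sketch of the general pattern described above: gather computed adsorption energies for a set of candidate alloys and check how they correlate with stability and activity data. The file name and column names are hypothetical; this is an illustration, not the actual iCatDesign protocols.

```python
# Minimal sketch (not the actual Pipeline Pilot protocols) of batch
# post-processing for a catalyst screening study. File and column
# names are hypothetical.
import pandas as pd

# One row per alloy surface, e.g. columns: "composition", "E_ads_O" (eV),
# "surface_energy" (J/m^2), "measured_activity" (arbitrary units)
results = pd.read_csv("castep_screening_results.csv")

# Rank candidates by oxygen adsorption energy (a common activity descriptor)
ranked = results.sort_values("E_ads_O")
print(ranked[["composition", "E_ads_O", "measured_activity"]].head(10))

# Pearson correlations between computed descriptors and measurements
print(results[["E_ads_O", "surface_energy", "measured_activity"]].corr())
```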
Tim (Accelrys)
Informatics in High Content Screening (HCS) is reshaping the mix of scientists driving drug discovery efforts. In the early days of HCS I worked closely with electrical, mechanical and software engineers to develop better systems for image acquisition and processing. My responsibilities as an HCS biologist involved painstaking hours of sample preparation and cell culture, and constant enhancements to my materials and methods for preparing biological specimens for imaging. I was motivated by the many new collaborative efforts that began with the software engineers, systems engineers and machine vision scientists developing HCS systems. I found myself teaching basic concepts of biology as I learned about illumination and optics, piezoelectric drives for autofocusing and, of course, the strings of zeros and ones that would eventually tell me what happened to my protein. It was exciting to be part of a cross-functional team developing new applications by piecing together advances in hardware, image processing and biological assay technologies.

High Content Screening systems and vendor software have come a long way since my introduction to the technology ten years ago. Vendors struggled to balance powerful, flexible systems against ease of use (1). The bottleneck has shifted from application development to data informatics. Software systems in HCS have evolved to integrate databases and other related sources for chemical structures, target characteristics and assay results. Today, I collaborate with colleagues in HCS in new areas that include data mining, principal component analysis, Bayesian modeling, decision trees and data management.

The mix of HCS conference speakers and attendees has shifted from what had primarily been assay developers to a growing population of informaticians and IT experts. Talks have moved beyond assay design and system development to incorporate more downstream data processing. We have worked on complex fingerprinting methods for predicting characteristics of a compound, such as its mechanism of action or how it might affect a particular biological pathway involved, for example, in neuronal stem cell differentiation. Vendors are moving to more open systems for image processing and are integrating more third-party applications into their HCS acquisition systems to keep up with the shifting bottlenecks and emerging solutions. Informaticians have been able to improve data analysis efforts and significantly reduce the number of man-hours required for downstream data analysis (2).

I've been fortunate to develop relationships with experts at most of the leading HCS instrument companies. My journey has been one of constant growth and continuous learning. I'm anxious to know what's coming next in High Content Screening and eager to learn from my ever-growing network of scientific experts.

1. "High-Content Analysis: Balancing Power and Ease of Use" by Jim Kling
2. "Data Analysis for a High Content Assay Using Pipeline Pilot: 8x Reduction in Man-hours," from a poster by L. Bleicher, BrainCells Inc.
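As a small illustration of the kind of downstream analysis mentioned above (a fingerprint built with principal component analysis, followed by a simple classifier), here is a hypothetical Python/scikit-learn sketch. The input file and column names are invented for the example and do not correspond to any particular vendor's HCS output.

```python
# Illustrative sketch only: PCA fingerprinting plus a decision tree on a
# table of high-content imaging features. File and columns are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

wells = pd.read_csv("hcs_well_features.csv")          # one row per well
features = wells.drop(columns=["compound", "moa"])    # numeric image features
labels = wells["moa"]                                 # known mechanism of action

# Reduce hundreds of correlated image features to a compact fingerprint
pca = PCA(n_components=10)
fingerprint = pca.fit_transform(features)

# Estimate how well the fingerprint predicts mechanism of action
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(clf, fingerprint, labels, cv=5).mean())
```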
Tim (Accelrys)
"Why do drugs affect some people more than others?" This was a question posed to me by one of my coworkers in the midst of our 6 mile after-office run last week. I was thinking, "why is it so easy for you to talk while running after 5 ?« miles?!" After numerous postulations, our meeting of the minds came to the conclusion that genetic diversity was at least part of the answer. Have you ever wondered if you"ve taken drugs that have actually had no positive effect? In a recent article in the New York Times, author Andrew Pollack describes how various drugs on the market don"t work as intended in, often times, all too much of the population. "Pfizer"s Xalkori for lung cancer, works wonders " but only for the roughly 5 percent of patients whose tumors have a particular chromosomal abnormality". Pollack goes on to describe the regulatory and payer pressures on the drug companies to come up with a measurable test for determining which patients are benefiting from the drugs and which ones may be subjected to harmful side effects with no chance of benefit. Regulatory authorities have spoken and given guidance on the subject and may even be helping drug companies get their drugs to market faster, get to peak sales faster and reduce cost in failing drugs earlier or by selecting patients more suitable for successful clinical trials. But some argue that this cost cutting will pale in comparison to the enormous losses that the drug companies will face by limiting their products only to those who will benefit from its use (reading statements like the ones found here make me really glad that regulatory authorities exist).Biomarkers are often the corner stone of these diagnostic tests and can be found in areas of research like next generation sequencing (NGS), molecular modeling and imaging. For example, a test done to sequence a particular region of your DNA could tell you whether or not you had the "chromosomal abnormality" that would render the drug effective. The cost of sequencing has dropped tremendously since the worldwide effort of sequencing the human genome that began in the 1990"s - estimated in the $3 Billion dollar range. Today new technologies in this area have the industry expecting that we will see the cost of a full human genome drop below $1000 in this decade. But sequencing alone doesn"t provide answers. Today the cost of analysis and interpretation of results will play a role in determining the total cost and implied benefit of tests that rely on sequencing. What if the patient and the payer shared in the cost of development of the diagnostic test? I, for one, look forward to the day I can pay a few hundred dollars and determine my best course of action. I was at a recent conference where one of my coworkers spoke on how after extensive genetic testing the attending team was able to offer her a few options with assigned success probabilities. Personalized medicine had arrived. Imagine the impact if even our over the counter supplements could be linked to personalized benefit through a companion diagnostic more immediate than weight loss or muscle tone. It will be here sooner than you think, just like mile 6 was for me.
Ruth
Next Generation Sequencing produces huge quantities of data, currently up to 60 million sequences per file. Algorithms used to analyse these data typically load all the information from one file into computer memory in order to process it. With the growth in data volumes, these algorithms are beginning to slow down. This is a particular problem for algorithms that detect new forms of RNA and quantify them in RNA sequencing experiments.

In his talk at the High Throughput Sequencing Special Interest Group (HitSIG), Adam Roberts from Berkeley, CA discussed his new online algorithm, EXPRESS, designed to interpret RNA sequencing data (Roberts and Pachter, 2011, in press, Bioinformatics). Online algorithms process data arriving in real time, updating the model one sequence at a time. As a result, the amount of memory required stays constant whatever the volume of data processed, and there is no need to save the data if it will not be analysed again later.

Online algorithms would fit very naturally into Pipeline Pilot data pipelines. They would also fit well with new real-time sequencing technologies such as the Oxford Nanopore GridION system, which already uses Pipeline Pilot to control its 'Run until... sufficient' workflows. Bringing all three technologies together would allow data interpretation to be generated directly from the sequencing machine, and the flood of data could be directed straight into the most useful channels.

Fig. 1: An RNA sequencing experiment showing a known and a newly discovered form of RNA, and the depth of the sequences used to identify them, along one region of the mouse genome.
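To make the "online" idea concrete, here is a toy Python sketch of a streaming estimator that processes one read at a time and keeps only constant-size running state. It illustrates the principle only; it is not the EXPRESS algorithm itself, and the read/transcript representation is invented for the example.

```python
# Toy illustration of an online (streaming) algorithm: reads are processed
# one at a time, the model is updated incrementally, and memory use stays
# constant regardless of how many reads the file contains.
from collections import defaultdict

def stream_abundances(read_stream):
    """read_stream yields, for each read, the list of transcript IDs it maps to."""
    counts = defaultdict(float)   # running (fractional) read counts per transcript
    total = 0.0
    for compatible in read_stream:
        total += 1.0
        # Split the read among compatible transcripts in proportion to the
        # abundances estimated so far (uniformly if nothing is known yet).
        weights = [counts[t] + 1.0 for t in compatible]   # +1 avoids zero weights
        norm = sum(weights)
        for t, w in zip(compatible, weights):
            counts[t] += w / norm
    # Convert running counts to relative abundances
    return {t: c / total for t, c in counts.items()}

# Example: three reads, the second one ambiguous between two transcripts
print(stream_abundances([["tx1"], ["tx1", "tx2"], ["tx2"]]))
```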
Marc
Antibodies are increasingly important in medical diagnostics and in the treatment of a broad range of disease states including cancer, inflammation and auto-immune diseases. Through antibody drug conjugates (ADCs), antibodies also enable the targeted delivery of traditional drugs. In contrast to traditional chemotherapeutic agents, ADCs target and attack the cancer cells so that healthy cells are less severely affected.

Many critical properties of antibodies cannot be predicted from sequence alone, so building computational models is seen as a cost-effective aid in antibody design, maturation and formulation processes. Specifically, some important properties that can be estimated from structural models are antigen affinity (avidity), stability and aggregation propensity. Because of the high structural similarity between different antibodies, and thanks to recently improved methods, structure prediction has been shown to give accurate antibody structure models.

The second Antibody Modeling Assessment (AMA-II) is a community-wide experiment organized by Pfizer and Janssen R&D in 2013 to assess the state of the art in antibody structure modeling. The participants included prediction teams from Dassault Systèmes BIOVIA (previously Accelrys), Chemical Computing Group, the Jeffrey Gray Lab at Johns Hopkins (Rosetta Antibody), Hiroki Shirai and Astellas Pharma, Inc., Macromoltek, Schrödinger and the fully automated PIGS (Prediction of Immunoglobulin Structure) server. The goal was to predict the 3D structure of 11 antibody variable domain targets given their amino acid sequence. The targets covered different antigen binding site conformations and represented a variety of species including human, mouse and rabbit. It was a blind-prediction study: the participants did not know the actual X-ray structures during the prediction phase.

Our team, consisting of Marc Fasnacht, Ken Butenhof, Anne Goupil, Francisco Hernandez-Guzman, Hongwei Huang and Lisa Yan, recently published a paper entitled "Automated Antibody Structure Prediction using Accelrys Tools: Results and Best Practices" in Proteins: Structure, Function and Bioinformatics (Wiley Online Library). The main BIOVIA (previously Accelrys) tool used in the study was the Discovery Studio life sciences modeling and simulation application.

During the prediction phase, each team member chose to build their targets using either a single, chimeric or multiple template approach for the framework region. The hypervariable loop regions in the initial models were then rebuilt by grafting the corresponding regions from suitable templates onto the model. The templates were selected by Discovery Studio, but with the flexibility of human intervention. In the post-analysis phase, the team carried out a systematic study of the modeling methods employed during the blind phase for framework modeling, i.e. single, chimeric and multiple template approaches, using a fully automated template selection process. In addition, they examined the factors affecting the refinement of the complementarity-determining regions (CDRs).

The analysis of the models shows that Discovery Studio enables the construction of accurate models for the framework and the canonical CDR regions, with backbone root-mean-square deviations (RMSDs) from the X-ray structures on average below 1 Å for most of the regions. The fully automated multiple-template approach matched or exceeded the best manual results. The antibody model assessment shows that the submitted models are of high quality, with local geometry assessment scores similar to those of the target X-ray structures.

Our team's approach, as published in Proteins, demonstrates the reliability of automated template selection for framework and CDR regions based on a curated database. The automated methods in our study generate models as good as or better than those built with manual intervention. You can access the paper here: "Automated Antibody Structure Prediction using Accelrys Tools: Results and Best Practices"
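As a point of reference for the RMSD figures quoted above, the sketch below shows how a backbone RMSD between a model and an X-ray structure can be computed after optimal superposition (the Kabsch algorithm). It is a generic Python/NumPy illustration, not the Discovery Studio or AMA-II assessment code, and the example coordinates are random stand-ins.

```python
# Generic sketch: backbone RMSD after optimal superposition (Kabsch algorithm).
import numpy as np

def backbone_rmsd(model_xyz, xray_xyz):
    """Both inputs are (N, 3) arrays of matched backbone atom coordinates."""
    P = model_xyz - model_xyz.mean(axis=0)   # center both coordinate sets
    Q = xray_xyz - xray_xyz.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix (Kabsch)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))       # correct for possible reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Example with random coordinates standing in for matched backbone atoms
rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 3))
print(backbone_rmsd(coords + 0.5 * rng.normal(size=(100, 3)), coords))
```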
Walkerr
I started Thanksgiving week thankful that it was a short week. It needed to be short to recover from the action-packed agenda at the 2009 Chemical and Biological Science and Technology Conference in Dallas the week prior. The conference was a huge success on many different levels. Although the conference has been ongoing for the last few years, this was the first time it combined the physical science and medicinal disciplines, and to the credit of the organizers it was well done! There were over 1,400 people in attendance with over 600 poster presentations and countless oral presentations; however, even with the number of people and logistical challenges that existed, this was one of the best events I have attended (and I have attended a few).

I enjoyed the conference from the perspective that I was able to connect with former colleagues and make new friends (accomplices). However, the most important aspect for me was hearing about some of the great work that is going into making our country safer. The science and technology is cutting edge and driving innovation in so many different disciplines.

As mentioned in a previous post, we were honored to present two posters. Nancy Miller Latimer presented a poster on "Using Data Pipelining to Analyze Biological Threats: A Biomarker Case Study", and Dr. Nick Reynolds presented a poster on "Applications of nanoscale simulations methods for understanding the structure and mechanisms of chemical sensors". Both posters were well received and very applicable to the technology challenges that we face; Dr. Reynolds and Ms. Miller Latimer directed and managed the traffic expertly (and there were a lot of people at the presentations).

Over the past few conferences I have attended, data management and integration has become an increasing concern for all the federal agencies as more and more data-intensive programs come into existence. From new drug and vaccine discovery to biometrics, the data produced for use and reuse is overwhelming legacy systems, and there is increasing focus on how to address this challenge.

Back to Ms. Miller Latimer's discussion of data pipelining: she demonstrated data pipelining, using Pipeline Pilot, in a biomarker case study for ALI (Acute Lung Injury). As part of this analysis, Pipeline Pilot was used to analyze and integrate mass spec proteomics data with gene expression data and sequences. The study also showed how to automatically mine the literature for the differentially expressed genes/proteins identified in the analysis and then publish enterprise-wide interactive solutions via web portals.

To underscore the interest in this integrative and flexible capability, Ms. Miller Latimer's work was recognized as the best poster overall (over 600 posters were presented). I was proud to be there as she received the award from Colonel Michael O'Keefe, Deputy Director, Chemical/Biological Technologies, Defense Threat Reduction Agency. Accelrys is proud of Ms. Miller Latimer's contribution. Well done!!
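To give a flavour of the integration step described in the ALI case study, here is a small, hypothetical pandas sketch that joins mass spec proteomics results with gene expression data and filters for candidates that change at both levels. It stands in for the idea only; the actual work was done in Pipeline Pilot protocols, and the file and column names here are invented.

```python
# Illustrative only: proteomics/transcriptomics integration expressed as a
# small pandas pipeline. File and column names are hypothetical.
import pandas as pd

proteomics = pd.read_csv("mass_spec_proteins.csv")   # gene_symbol, protein_fold_change
expression = pd.read_csv("gene_expression.csv")      # gene_symbol, mrna_fold_change, p_value

# Join the two data sources on a shared gene identifier
merged = proteomics.merge(expression, on="gene_symbol", how="inner")

# Keep candidates that are differentially expressed at both levels
candidates = merged[
    (merged["protein_fold_change"].abs() > 2)
    & (merged["mrna_fold_change"].abs() > 2)
    & (merged["p_value"] < 0.05)
]
print(candidates.sort_values("p_value").head())
```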
Brownf
Accelrys has recently concluded a series of meetings with a specially convened Biological Registration Special Interest Group (SIG) formed between several major pharmaceutical companies and Accelrys. The objective of this forum was to understand some of the critical market and product requirements needed to build a state-of-the-art biologics registration system.

The success of the SIG can be attributed to the customer members being very open with one another, in spite of being competitors, and to the tremendous diligence each company put into specifying user requirements. This open and collaborative approach to software development has become an innovative way to introduce first-of-a-kind technology into the market.

First-of-a-kind software is usually developed as a bespoke project for a single company and then modified over time to meet the needs of the wider market. This can create disadvantages for early adopters as the product functionality evolves and improves with subsequent releases. The situation can be avoided by gathering a wider set of requirements through a collaborative SIG formed from a diverse and representative sample of interested parties.

Capturing and prioritizing a wider set of requirements, with leading companies discussing and debating the relative merits and benefits of proposed features, is a more efficient and effective way of understanding market requirements than more traditional methods. The approach also enables the development team to capture feedback and more rapidly create a product that should be attractive to the wider market. The anticipated result is the timely delivery of a product that is well positioned to capture both broad interest and market share.

Have you innovated through collaborative work groups? If so, we would welcome the chance to learn from your experience.
Brownf
In my last blog, I talked about how improved global collaboration in the cloud is improving not only neglected diseases research but also the "exuberance quotient" of science. Today, our current economic woes tied to the sovereign debt crisis have got me thinking about the darker cloud hovering over researchers, one that may very well threaten "exuberant" science in the months ahead, especially in university labs.

There's no way you can look at today's economic situation and postulate that government funding of scientific research in academic labs is going anywhere but down. It stands to reason that this will drive changes in behavior and push academia to find funding alternatives for its research.

First, university labs will need new ways to collaborate externally, not only with colleagues at other institutions but with those at the many commercial companies that will likely end up funding more and more academic research as government sources dry up. Second, they will need viable channels for commercializing the technology they develop, so that new applications, protocols and processes emerging from university labs become readily available to the wider scientific community (while also providing a return revenue stream supporting university research). Last but not least, with university researchers under increasing pressure to publish results, secure patents and acquire grants in the face of shrinking budgets and resources, they need simplified access to affordable software and services, and we just took steps toward that end with our recently announced academic program.

This new academic paradigm and the resulting wish list become much more achievable when university researchers deploy their technology on a scientific informatics platform that's already widely used in the commercial world. This provides a built-in installed base and ready market for workflows and protocols. A widely deployed platform with the ability to capture a protocol as a set of XML definitions enables scientists working in the same environment to replicate an experiment or calculation with drag-and-drop simplicity and precision. If you start with the same data set, you end with the same results. Experiments are more reproducible, academic papers more credible and, most importantly, non-experts can advance their research using robust, expert workflows.

Academic researchers drive innovation that impacts the larger scientific community, but getting that innovation out there is still a challenge. In this regard, an industry-standard platform can also serve as the basis for an innovative new marketplace, a kind of scientific application exchange, where academics and their partners can expose their breakthrough technologies to a wider audience and even charge a fee for using them. In the present economy, this new channel could provide much-needed additional funding and a feedback loop for academic groups, enabling them to continue their vital research.

What are your thoughts on surviving, and perhaps even thriving, in today's down economy?
deborah.ausman
Symposium is just one week away, and the final agenda is now available. Three keynote speakers will address attendees next week in Barcelona:

- Ashley George, director of the strategic IT portfolio for discovery at GlaxoSmithKline, will speak about enterprise cloud computing in drug discovery
- Paul McKenzie, VP of biologics pharmaceutical product development and marketed product support at Johnson & Johnson
- Jason Bronfeld of Bristol-Myers Squibb will speak about how to realize value through strategy-driven informatics

If you are not able to attend the conference in person, follow the meeting on Twitter at #sym2010 or at SymyxTech. And stay tuned to this blog for real-time summaries from the meeting rooms and exclusive interviews with presenters and attendees.
dominicj
Wikipedia defines catalysis as "the change in rate of a chemical reaction due to the participation of a substance called a catalyst. Unlike other reagents that participate in the chemical reaction, a catalyst is not consumed by the reaction itself. A catalyst may participate in multiple chemical transformations."

For over a millennium, scientists have turned to catalysts to accelerate and improve chemical transformations. Is there more than meets the eye here, though? What if we took the catalyst concept and applied it to the scientific business around us? Today we know the life sciences industry is looking to transform itself by accelerating innovation, speeding time-to-market and improving competitiveness. Instead of tactically "band-aiding" and making slow business transformations, can we look for an innovation catalyst: something that accelerates change without being consumed in the process?

Here's one opinion from someone who is admittedly focused on scientific informatics. A catalyst for accelerating industry transformation lies in an informatics investment that reduces the activation energy required to connect scientists, information and software across the research-to-manufacturing continuum. Improved scientific innovation lifecycle management, delivered via a common, scientifically aware informatics platform, can play a positive role in achieving this critical business need.

Let's take a closer look at the challenges we face and the "catalytic" transformation we seek. Industry trends including increased cost pressures, globalization, externalized R&D and government regulation are driving life sciences organizations to:

- Foster scientific innovation and new products
- Accelerate products from lab to market
- Improve product quality and compliance
- Reduce the cost of R&D operations
- Improve operational efficiency within a globally networked R&D business model

Today's globally networked business model is a critical driver of this need for transformation. At present, it is not uncommon for a life sciences organization to have 20 or more partnerships within a single therapeutic area. Outsourcing to external centers of excellence is motivating organizations to reassess their existing internal processes and systems. How well are they meeting the needs of global scientific teams operating across diverse geographic, business and cultural boundaries?

The need for improved collaboration is another driver of change. With disparate systems capturing study, project, experiment and sample data across the innovation lifecycle, project teams are often faced with inconsistent, siloed information that is difficult to transfer from one development stage to another. As a result, valuable knowledge and insight are lost, and scientists are unable to coordinate and collaborate effectively.

Finally, the evolution of FDA regulations and guidance around Quality by Design (QbD) is also driving a critical reassessment of scientific informatics. To embrace QbD, organizations need rich insights into the design space affecting product and process quality. Most importantly, scientists need this understanding early in the development cycle and iteratively throughout the design-test-manufacture pipeline.

Scientific Informatics: A Positive "Catalyst" for Change

A common, scientifically aware informatics platform for R&D can serve as a catalyst for transforming innovation throughout the drug development cycle, from early experimentation to volume production. This enterprise approach to R&D informatics, one that makes critical scientific information available to multiple stakeholders, is better aligned with today's distributed, networked R&D ecosystem.

Replacing paper-based processes with electronic workflow and process documentation is also a critical step in improving innovation productivity. Paper processes are inefficient and prone to errors; they are also not searchable, not traceable, and they hinder collaboration and information sharing. In contrast, a modern informatics system built on a common platform with integrated electronic laboratory notebooks supports a QbD strategy that lowers compliance costs and improves product quality, catalyzing the delivery of better therapeutic products, faster and at lower cost to patients.

From an informatics perspective, what do you think is necessary to transform life sciences research today?