Tony Chan, Assistant Director, Physical Sciences, NSF
Keynote I: The Brave New Old World of Design Automation Research, Ralph Cavin and Bill Joyner (SRC), and Wally Rhines (Mentor Graphics)
We all remember the old EDA forecasts: the looming "productivity gap" showing the inability of design tools to keep pace with density and complexity increases, the "solved problem" status of placement and routing, the hopelessness of formal verification techniques. Despite these dire warnings, the unique partnership between university researchers and industry developers has remained at the foundation of EDA and, by extension, of the semiconductor industry. New and increasingly complex chips and systems are being designed, advances in placement continue, and formal methods have made it into the marketplace. Historically, the old world where synthesis, physical design, and layout were separate disciplines has morphed into the new but older Carver Mead-like world of tall, thin tool developers who need to know, even at the system level, what electromigration and NBTI are. The need for productivity improvements is pulling design up to the system level, while performance advances often require detailed knowledge reaching all the way down to the physics.
Rumors of the stagnation and maturing of EDA are greatly exaggerated. History shows that growth opportunities in design automation come from solving new problems (and new problems abound); that adoption of leading-edge semiconductor technology is continuing, not slowing; and that innovation still drives a dynamic industry and research field.
Going forward, trends in even more diverse directions are what make research in design automation continue to be an exciting and critically important endeavor. At the top, applications from smart energy grids to cyber-physical systems to health care drive specialized designs and software as multi-core systems and parallelism make yet another comeback. At the bottom, new devices beyond CMOS and perhaps beyond silicon, including nanomorphic cells and biological interfaces, bring new challenges for design tools. Partnerships linking design automation with grounded theoretical efforts in computer science and optimization, mathematics, chemistry, biology, and mechanical engineering are essential.
Government support of design automation, to cement these partnerships and extend the historic growth rates that sustain everything that rests on electronics, remains critical in this new, old, and increasingly multidisciplinary world. The 2006 NSF Report on Future Directions in Design Automation Research called for a National Design Infrastructure to focus on system design, robust optimization, and the interface to manufacturing, and emphasized design research as a national priority, as a key differentiator for national competitiveness, and as a critical enabler in partnership with process technology for advances in computing, and thus in science. It still is.
9:30am-10:45am: Session I: Automation and Abstraction
-
The Future of Electronic Design Automation: Methodology, Tools and Solutions, Sharad Malik, Princeton
Electronic Design Automation (EDA) has always been about electronic system design methodology. Establishing design methodology is critical to automating the design process. The tangible results of this endeavor are design tools, which provide a significant productivity boost as well as improved design quality. For example, the Application Specific Integrated Circuit (ASIC) design methodology had several critical components that were essential for the subsequent development of EDA tools. The use of standard cells and synchronous timing, while taken for granted today, was non-standard and driven in large part by research in design methodology. Their choice was essential to making the problem manageable and was driven not just by what we needed to solve, but by what we could reasonably solve. Thus, part of solving the larger design problem has involved a constant re-defining of the sub-problems.
Going forward, EDA research needs to redouble this emphasis on design methodology. In the late silicon era, we face significant challenges in lowering design costs and increasing design quality (broadly construed to include power, performance, and reliability). We need innovation not just in seeking solutions to these, but in helping define the sub-problems that we need to tackle. As before, this definition will be determined in large part by what can be reasonably solved. Radical innovations will be needed to change the status quo, which seems ill equipped to handle the design challenges of the next decade.
As an example, a promising new direction to help with verification and reliability challenges is to consider augmenting the design with resources for monitoring as well as recovery in the face of detected errors. This is very promising as it lowers pre-silicon verification cost by shifting the burden of detecting subtle errors to run time. However, novel analysis and synthesis techniques will be needed to enable this approach.
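The following is a minimal, software-level sketch of this monitor-and-recover idea, not a technique described in the talk: a block is modeled as a function, an independent checker watches an end-to-end invariant, and a recovery path is taken when the checker fires. All names and the fallback policy are illustrative.

```python
# Sketch: wrap an operation with a run-time checker and a recovery action.
def monitored(operation, checker, recover):
    """Run `operation`; if `checker` rejects the result, invoke `recover`."""
    def wrapped(*args):
        result = operation(*args)
        if not checker(args, result):
            result = recover(*args)      # e.g., re-execute on a verified slow path
        return result
    return wrapped

# Example: a (deliberately buggy) fast adder guarded by an end-to-end checker.
def buggy_fast_add(a, b):
    return a + b + (1 if a == 42 else 0)   # subtle error that escaped pre-silicon test

def checker(args, result):
    a, b = args
    return result == a + b                 # the invariant being monitored

safe_add = monitored(buggy_fast_add, checker, lambda a, b: a + b)
assert safe_add(42, 1) == 43               # error detected and recovered at run time
```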
Similar solutions need to be explored for other threats to design cost/quality. These solutions may well span traditional area boundaries and include architectures and software. This will require a concerted research effort in design methodology. The tools will follow.
-
EDA - Electronic Design Automation or Electronic Design Assistance?, Andreas Kuehlmann, Cadence
The vision of a silicon compiler is as old as EDA itself - the dream of a "1-Click" design process that turns a high-level specification into a full mask set ready for manufacturing. Who would have imagined 30 years ago that designing an integrated circuit in the year 2009 would still be dominated by navigating an ever-growing pile of individual tools stitched together by millions of lines of awk, sed, perl, tcl, and shell scripts? We are still mostly brute-forcing our way to validating design correctness; we are still wiggling the RTL source code to trick logic synthesis into doing what we expect it to do; and we are still pushing polygons where physical design tools cannot be trusted. Yet tremendous progress has been made on many frontiers, from modeling ever more complex interactions between design components to finding algorithms that scale to real-world instances of old and new problem formulations. Besides further improving core algorithms and expanding EDA into new domains such as system, software, or manufacturability, will we ever get a handle on the core "automation" problem? This presentation will cover a few thoughts on this question.
-
Frontend SoC design: The neglected frontier, Arvind, MIT
Most EDA research seems to focus on backend tools that solve problems related to timing closure, routing, area minimization, clock distribution, component variability, reliability, and verification. Even the most widely used formal technique -- equivalence checking -- is applied only to the final net-lists. A problem that is recognized but hardly studied is the lack of reuse of complex IP blocks. How can we develop designs that are malleable and can be used across technology generations and for different design points? The major impediment to the development of such designs is the archaic design languages, which have not seen any substantial change since the early nineties and are quite primitive compared to modern software languages. We argue that research into high-level languages that embody some degree of non-deterministic specification, can express parallelism, and can be synthesized into efficient hardware structures holds the greatest promise to revolutionize SoC design. Such research and tool development requires close collaboration between computer scientists, who understand abstraction, types, and logics, and EEs, who can help build the right abstractions for the underlying hardware substrate.
-
EDA Challenges in Systems Integration, Jochen A. G. Jess, Associate Member of IEEE
This contribution attempts to address the EDA problems and challenges in systems integration that result from both the technological progress of semiconductor technology and the conditions of the market for electronic devices. The progress of semiconductor technology allows increasingly larger functionality to be integrated on a single chip. However, those chips are part of larger systems, and those systems eventually constitute a product. The economics of the products is to a large degree determined by the "Nonrecurring Engineering Cost" (NRE cost) of all its components. But recently the integration of all those components has become a significant cost factor. The integration involves (in the view of this author) the establishment of the entirety of the communication channels between processor cores, memories, and peripherals, that is, the "system interconnect", intended to accommodate the data traffic between those components and to the outside world. We are talking about bus technologies (e.g., AMBA AXI and PMAN for chips) and networking technologies, on the chip (ÆTHEREAL), on the board (PCIe), or on both simultaneously (UniPro, SlimBus).
The design and implementation of the system interconnect involves not only the assembly of complicated hardware components but, to a much larger degree, the handcrafting of large composites of programs that establish the necessary communication protocols. Moreover, the communication processes interfere with each other while contending for the available interconnect hardware, which makes the modularization of the design a tall order. Worse, the system interconnect substantially determines the system performance in terms of computational efficiency and power consumption.
The PC industry and, to a lesser extent, the mobile component industry conventionally solve this problem by adhering to long-term standards with a slow evolution (examples: FSB, Northbridge and Southbridge, AGP, PCI, and ATA for PCs). This strategy enables extensive reuse of components and software. The penalty is a certain lack of performance and a tendency to heavily overdesign the hardware components. Yet this is not felt by the industry, since it is in a position to determine its roadmap over a long period of time and faces comparatively little competition. Other industries in smaller or more professional market segments, such as medical systems, defense, optical instruments, space technology, process control, or (important for the near future) the control of smart power grids, can rarely exploit the full power of the most recent nodes of the semiconductor roadmap (with the possible exception of FPGAs) for cost reasons. Altogether this segment comprises some tens of thousands of companies representing a market share comparable to that of the top ten players. A similar trend holds for consumer products. This is a mass market, but one with very high risks because of the erratic course of customer taste. The NRE cost (together with the cost of fabrication) often enough prevents the investment from being recovered.
The future will most likely drive those industries, too, to exploit more aggressive semiconductor technology nodes. The performance increase according to Moore’s Law may level out in the next ten to fifteen years. In order to accommodate increasing performance requirements, the industry will most likely try aggressively to handle the design complexity of higher-density semiconductor technology. This represents a challenge to the entire area of EDA. In particular, the NRE cost of system assembly and interconnect design, including the generation of interconnect protocol software and performance assessment, needs to be reduced.
The talk will attempt to sketch an advanced design flow for systems integration and will try to identify the bottlenecks and challenges in this flow.
11:00am-12:30pm : Session II: Verification and Test
-
Is Today’s Design Methodology a Recipe for a "Tacoma Narrows" Incident?, Carl Seger, Strategic CAD Labs, Intel Corp.
On November 7, 1940, the Tacoma Narrows Bridge collapsed into Puget Sound. This failure vividly illustrates the danger in extending best engineering practices in a linear fashion. In the following investigation report, one of the experts[1] wrote:
"The Tacoma Narrows bridge failure has given us invaluable information...It has shown [that] every new structure [that] projects into new fields of magnitude involves new problems for the solution of which neither theory nor practical experience furnish an adequate guide."
As chip manufacturing continues to follow Moore’s law, doubling the number of transistors available to designers every two years, extremely complex systems are being constructed in silicon. Today, the limiting factor in the size and complexity of these systems is our ability to validate the designs from a functional perspective. At the same time, our society is becoming increasingly dependent on correctly functioning electronic infrastructure, raising the specter of life-threatening failures.
Today, functional validation is performed with a combination of emulation, simulation, and formal verification, and effectively uses the same approach that was used in validating all previous generations of designs, only on a larger scale. Despite tremendous efforts (often more than 50% of the design effort), serious bugs and omissions escape the process and have to be dealt with by re-design, software workarounds, and/or recalls.
To begin addressing this validation crisis, both fundamental basic research and practical applied research are urgently needed. For example, work on finding good abstraction techniques (language, coding style, abstraction level, etc.), techniques to explicitly manipulate abstraction levels (refinement techniques, different "views" of the same model, etc.), breakthrough formal verification techniques to relate models at different levels of abstraction, and formal risk assessment techniques are all needed to move our ad hoc design approaches toward predictable and sound engineering. In addition, it is becoming abundantly clear that post-design validation is not a sustainable approach and that fundamentally different methodologies are needed. In particular, design and validation must likely be done concurrently rather than separately.
My talk will be a "call to arms" for attacking the problem of designing extremely large systems in a predictable and correct manner.
-
Statistical Model Checking of Simulink models, Edmund M. Clarke, CMU
Stochastic systems arise naturally, for example, because of uncertainties present in a system’s environment (e.g., the reliability of communication links in a wireless sensor network, the rate of message arrivals on an aircraft’s communication bus, or the number of contending peers in a Bluetooth device discovery phase). Uncertainty is usually modeled via a probability distribution, thereby resulting in stochastic systems, i.e., systems that exhibit probabilistic behavior. This raises the question of how to verify that a stochastic system satisfies a certain property. For example, we want to know whether the probability of a communication bus delaying a message is smaller than 0.001; or whether the system fulfills a request within 1ms with probability at least 0.99. In fact, several temporal logics have been developed in order to express these and other types of probabilistic properties. The fundamental verification problem is thus to prove that a stochastic model satisfies a temporal logic property with a probability greater than or equal to a certain threshold.
Unfortunately, the state space of stochastic models is often too large for standard (numerical) Model Checking techniques. A statistical approach to Model Checking, based on randomized sampling of the system’s traces and statistical hypothesis testing, may be an effective alternative. While the answer to the verification problem is not guaranteed to be correct, the probability of giving a wrong answer can be bounded. As a result, answers can usually be given much faster than with standard Model Checking techniques. We have introduced a new approach to statistical Model Checking, based on Bayes’s theorem and sequential sampling. We have successfully applied our approach to several representative Simulink/Stateflow models: a delta-sigma modulator, an automatic transmission controller, and a fault-tolerant fuel control system. The sequential character of our approach means that the number of sampled traces is not fixed a priori, but it is instead determined at "run-time". The use of Bayes’s theorem enables our algorithm to take advantage of previous knowledge about the model, where available. We have demonstrated that our algorithm generally leads to faster verification than state-of-the-art approaches, based on either statistical or standard techniques.
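As a rough illustration of the flavor of this approach, and not the authors' implementation, the sketch below performs Bayesian statistical model checking on a stand-in Bernoulli "trace oracle": traces are sampled sequentially, and a Bayes factor for H0: p >= theta versus H1: p < theta, computed from a Beta prior via SciPy, decides when to stop. The oracle, the prior, and the thresholds are all assumptions made for the example.

```python
# Sketch of Bayesian statistical model checking with sequential sampling.
import random
from scipy.stats import beta  # Beta prior/posterior over the unknown probability p

def sample_trace_satisfies():
    """Hypothetical stand-in: simulate one trace and check the property on it.
    In practice this would run a Simulink/Stateflow simulation and evaluate
    a temporal-logic property on the resulting trace."""
    return random.random() < 0.995  # unknown "true" probability, for demo only

def bayes_factor(successes, trials, theta, a=1.0, b=1.0):
    """Bayes factor of H0: p >= theta against H1: p < theta, Beta(a, b) prior."""
    post = beta(a + successes, b + trials - successes)
    p_h1 = post.cdf(theta)            # posterior mass below theta
    p_h0 = 1.0 - p_h1                 # posterior mass at or above theta
    prior = beta(a, b)
    prior_h1, prior_h0 = prior.cdf(theta), 1.0 - prior.cdf(theta)
    return (p_h0 / p_h1) * (prior_h1 / prior_h0)

def statistical_model_check(theta=0.99, threshold=100.0, max_samples=100000):
    """Sample traces until the Bayes factor is decisive either way."""
    successes = trials = 0
    while trials < max_samples:
        trials += 1
        successes += sample_trace_satisfies()
        bf = bayes_factor(successes, trials, theta)
        if bf > threshold:
            return True, trials       # accept H0: property holds with prob >= theta
        if bf < 1.0 / threshold:
            return False, trials      # accept H1
    return None, trials               # undecided within the sample budget

if __name__ == "__main__":
    verdict, n = statistical_model_check()
    print(f"verdict={verdict} after {n} sampled traces")
```

Note that the number of sampled traces is not fixed in advance; the loop stops as soon as the evidence is decisive, which is the sequential character the abstract describes.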
-
Deconstructing Concurrency Heisenbugs, Shaz Qadeer, Microsoft
Concurrency is pervasive in large systems. Unexpected interference among threads often results in "Heisenbugs" that are extremely difficult to reproduce and eliminate. We have implemented a tool called CHESS for finding and reproducing such bugs. When attached to a program, CHESS takes control of thread scheduling and uses efficient search techniques to drive the program through possible thread interleavings. This systematic exploration of program behavior enables CHESS to quickly uncover bugs that might otherwise have remained hidden for a long time. For each bug, CHESS consistently reproduces an erroneous execution manifesting the bug, thereby making it significantly easier to debug the problem. CHESS scales to large concurrent programs and has found numerous bugs in existing systems that had been tested extensively prior to being tested by CHESS. CHESS has been integrated into the test frameworks of many code bases inside Microsoft and is used by testers on a daily basis.
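A toy sketch in the spirit of (but far simpler than) CHESS: it enumerates the interleavings of two threads' atomic steps, checks an invariant after each schedule, and returns a failing schedule that can be replayed deterministically. The two-step non-atomic increment is a stand-in example, not code from the tool.

```python
# Sketch: systematic exploration of thread interleavings to expose a lost update.
def thread_steps(name):
    """Each thread increments a shared counter non-atomically: read, then write."""
    def read(state, local):
        local[name] = state["counter"]
    def write(state, local):
        state["counter"] = local[name] + 1
    return [(name + ".read", read), (name + ".write", write)]

def interleavings(seqs):
    """All merges of the per-thread step lists, preserving each thread's program order."""
    if all(not s for s in seqs):
        yield []
        return
    for i, s in enumerate(seqs):
        if s:
            for tail in interleavings(seqs[:i] + [s[1:]] + seqs[i + 1:]):
                yield [s[0]] + tail

def run(schedule):
    """Execute one deterministic schedule from a fresh initial state."""
    state, local = {"counter": 0}, {}
    for _, step in schedule:
        step(state, local)
    return state

def find_heisenbug():
    """Search all schedules; return the first one that violates the invariant."""
    seqs = [thread_steps("t1"), thread_steps("t2")]
    for schedule in interleavings(seqs):
        if run(schedule)["counter"] != 2:             # both increments should land
            return [label for label, _ in schedule]   # replayable buggy schedule
    return None

if __name__ == "__main__":
    print("buggy interleaving:", find_heisenbug())
```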
-
Test and Validation Challenges in the Late-Silicon Era, Tim Cheng, UC Santa Barbara
Current solutions for ensuring the viability of our system-chips, i.e., that manufactured chips are indeed working correctly, have either been pushed to their limits or proven to be cost-ineffective or inadequate in the face of enormous complexity, parametric variations, environmental variations, and aging. We need fundamental breakthroughs in design, verification, validation, and test technologies to continue to produce and maintain working chips at an affordable cost. Addressing these immensely complex challenges requires collaborative research in all areas of system validation, software and hardware verification, post-silicon validation, manufacturing testing, and post-deployment resiliency. In this talk, we will present some of the challenges and new research opportunities in this domain.
-
A Faulty Research Agenda, Rupak Majumdar, UCLA
Faults and variations are becoming increasingly pronounced in emerging applications and technologies. We argue that fault-aware EDA is important, has many rich theoretical and engineering problems, and has applications beyond VLSI design. We discuss some initial progress, and outline a research agenda and open problems in the field.
12:30pm-1:30pm: Lunch
1:30pm-3:00pm : Session III: Electrical/Physical Design and Manufacturing
-
Numerical Modelling and Simulation for EDA: Past, Present and Future, Jaijeet Roychowdhury, University of California, Berkeley
Numerical modelling and simulation -- the progenitor of the field of EDA -- has been enjoying a resurgence in today's deep submicron era. This renewed interest has several roots: "SPICE-level accuracy" is becoming increasingly important in digital designs; mixed-signal/RF/MEMS blocks are proliferating; there is a need to auto-generate high-level descriptions from low-level ones; and cross-fertilization with non-traditional areas of DA (like systems biology) holds much promise. In this talk, I will touch on some of the history of simulation in EDA and mention a few novel mathematical, numerical, and software challenges and opportunities that arise today.
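As a small illustration of what "SPICE-level" numerical simulation involves at its core, here is a sketch of backward-Euler transient analysis of a single RC node; the component values and time step are illustrative, not taken from the talk.

```python
# Sketch: SPICE-style transient analysis of one RC node charging toward Vs.
def rc_transient(vs=1.0, r=1e3, c=1e-9, dt=1e-8, steps=500):
    """Backward Euler on C dv/dt = (Vs - v)/R:
    (C/dt + 1/R) * v_new = (C/dt) * v_old + Vs/R, solved at each time step."""
    v = 0.0
    g = 1.0 / r        # conductance of the resistor
    gc = c / dt        # companion conductance of the capacitor
    waveform = []
    for k in range(steps):
        v = (gc * v + g * vs) / (gc + g)   # direct solve of the 1x1 linear system
        waveform.append((k * dt, v))
    return waveform

if __name__ == "__main__":
    t, v = rc_transient()[-1]
    print(f"v({t*1e6:.2f} us) = {v:.4f} V")  # approaches Vs with time constant RC = 1 us
```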
-
ANALOG CAD: NOT DONE YET, Rob A. Rutenbar, CMU
Today is roughly ten years after the introduction of the first wave of commercial analog synthesis/optimization startup companies. These startups harvested research from the 1980s and 1990s and tried to make the first automation dent in the mostly-manual methodologies of modern analog/RF design. As the founder of one of these "first wave" efforts -- Neolinear Inc., started 1998, acquired by Cadence (NASDAQ: CDNS) in 2004 -- I'll briefly review what we did right, what we did wrong, and what is still not solved. In particular, I will focus on three topics: (1) Why analog (mixed-signal) systems are very different from analog (transistor-level) circuits. (2) Why capturing design "intent" is so hard, and so vital, for analog. (3) Why design environments and interfaces are so important in analog, and why Adobe Photoshop provides a much more "analog-friendly" metaphor for design than Microsoft Powerpoint.
-
A Flat Earth for Design and Manufacturing, Jason Hibbeler, IBM
For advanced process technologies, the wall between design and manufacturing continues to erode dramatically. Manufacturing and design can no longer stand at the opposite poles of the globe and communicate with each other across the equatorial line of ring oscillators and manufacturing ground rules. In order to achieve robust and predictable designs in new technologies, we must re-envision our world as a flat earth, with information and concepts flowing more easily and efficiently between the two areas. Many novel techniques for model-based design have been proposed and deployed. We'll discuss experiences with a traditional rule-based hand-off flow, highlight some key advantages and disadvantages of rules vs. models for different design groups and design goals, and illustrate some specific recent advances in using models on the design side. Against the background of the increasing difficulty of maintaining reliable technology scaling, the need for a new design framework is clear. But for the design groups, the implications of new techniques on cost, schedule, and complexity must be correctly accounted for. We will suggest ways in which innovation can proceed profitably and effectively.
-
Collaborative Innovation of EDA, Design, and Manufacturing, Jyuo-Min Shyu, National Tsing Hua University
The semiconductor industry is facing technological and economic challenges. Not only is Moore’s Law approaching its fundamental limits, but the design and manufacturing costs of semiconductor chips are increasing rapidly, and only a few giga-fabs can provide manufacturing technologies and services to the industry. While chip designers still strive to meet ever more challenging design efficiency and accuracy requirements, more and more IDMs migrate to either fab-lite or fabless models. To reduce cost and risk, they join forces with research institutes, fabless companies, and leading design/manufacturing service companies to build and share "Open Innovation Platforms", where EDA contributes the computational foundations and algorithmic aids needed to deal with large, complicated system design, nanometer design, and manufacturing issues such as embedded multi-core design, manufacturing and environmental uncertainties, 3D ICs, and post-CMOS devices. Collaborative innovation is therefore expected to sustain the growth of the semiconductor industry.
-
From Computability to Simulation, Optimization, and Back, Igor Markov, University of Michigan
One of the grand challenges in computer science is to improve the understanding of what is computable, both in theory and in practice. To this end, motivated by practical needs, the EDA community has consistently pushed the envelope in physical simulation, formal verification, high-performance optimization, and other computational tools. Recent developments in physics-based computing offer a rich set of opportunities for EDA, e.g., to simulate physical systems that promise new computational powers. However, the EDA community must make a leap to consider computational simulation not just an engineering tool tied to commercial technologies, but also an instrument of scientific discovery.
The evolution of physical systems is often described by the principle of energy minimization, which is familiar to EDA researchers through simulated annealing, force-directed placement, and electrostatic analogies used in large-scale layout optimization. Conversely, fundamental algorithms in EDA can be adapted to challenges in physics, such as energy minimization in Ising spin-glasses. Ising models are used, e.g., as a testbed for research in adiabatic quantum computing. Returning to the challenges of computability, we show how EDA techniques can be adapted well beyond their intended applications.
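As an illustration of that connection (a sketch, not code from the talk), the loop below applies simulated annealing, the classic EDA move-and-accept recipe, to energy minimization in a small, randomly coupled Ising spin glass. The coupling distribution and cooling schedule are illustrative choices.

```python
# Sketch: simulated annealing on a fully connected Ising spin glass.
import math, random

def ising_energy(spins, J):
    """E = -sum_{i<j} J[i][j] * s_i * s_j for spins s_i in {-1, +1}."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(n=16, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    J = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]  # random couplings
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(spins, J)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling schedule
        i = rng.randrange(n)
        spins[i] = -spins[i]                            # propose a single-spin flip
        new_energy = ising_energy(spins, J)
        delta = new_energy - energy
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            energy = new_energy                         # accept the move
        else:
            spins[i] = -spins[i]                        # reject: undo the flip
    return spins, energy

if __name__ == "__main__":
    _, e = anneal()
    print("final energy:", e)
```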
In addition to fruitful interdisciplinary interactions, EDA faces a number of major challenges in research and student training, which justify continued NSF funding. Besides the near-term research advocated by industry, we need to understand the mathematical basis of existing EDA techniques and to simplify and generalize them. Such results may bring great benefits to education, industry, and further research. We also need to document the rapidly expanding body of EDA knowledge and train our students in fundamental skills, rather than the latest buzzwords and acronyms.
3:30pm-4:15pm : Keynote II: Future IT Infrastructure Research Challenges: An HP Labs View, Prith Banerjee
The proliferation of new modes of communication and collaboration has resulted in an explosion of digital information. To turn this challenge into an opportunity, the IT industry will have to develop novel ways to acquire, store, process, and deliver information to customers — wherever, however, and whenever they need it. An "Intelligent IT Infrastructure", which can deliver extremely high performance, adaptability, and security, will be the backbone of these developments. At HP Labs, the central research arm of Hewlett Packard, we are taking a multidisciplinary approach to this problem by spanning four areas: computing, storage, networking, and nanotechnology. We are working on the design of an exascale data center that will provide 1000X performance while enhancing availability, manageability, and reliability and reducing power and cooling costs. We are working to ease the transition to effective parallel and distributed computing by developing the software tools that allow application developers to harness parallelism at various levels. We are building a cloud-scale, intelligent storage system that is massively scalable, resilient to failures, self-managed, and enterprise-grade. We are designing an open, programmable wired and wireless network platform that will make the introduction of new features quick, easy, and cost-effective. Finally, we are making fundamental breakthroughs in nanotechnology — memristors, photonic interconnects, and sensors — that will revolutionize the way data is collected, stored, and transmitted. To support the design of such an intelligent IT infrastructure, we will have to develop sophisticated system-level design automation tools that trade off system-level performance, power, cost, and efficiency.
4:15pm-6:00pm : Session IV: Extending Moore’s Law and EDA
-
Working Around the Limits of CMOS, Mary Jane Irwin, Penn State University
The design constraints of improved performance, better energy efficiency, increased reliability, and constrained design costs challenge EDA researchers as silicon technology continues to scale according to Moore’s Law. However, there are functions that our "standard" silicon technology – CMOS – just doesn’t do well. For functions such as global interconnects, on-chip non-volatile memory, and massive (high-bandwidth) input/output, technologies other than CMOS combined with 3D integration hold great promise. For example, a network-on-chip in a second layer exploiting optical and/or RF technology can provide high-performance, energy-efficient, and reliable global interconnects. SRAM/DRAM memory stacking allows massively parallel memory access, helping to mitigate the memory wall and dramatically reducing off-chip memory energy consumption. Additionally, stacking emerging non-volatile memory, which is immune to radiation-induced soft errors, can provide on-chip non-volatile storage while consuming zero standby power. Stacked layers of chemoresistive sensors, mass-sensitive nanoresonators, and biologically-selective FETs fabricated via a directed-assembly approach can provide radically new input/output mechanisms.
But achieving the promise of 3D integration as a way to sustain Moore’s Law, as well as to enable More-than-Moore, requires advances by the EDA community working with the design community, as well as interdisciplinary efforts with chemists, biologists, and materials scientists. Fundamental research challenges for the designer include determining a functional partitioning that maximizes the benefits of vertical connections while achieving optimal performance and energy efficiency, designing the interface circuitry between the CMOS "brains" and the non-CMOS technologies, and ensuring temperature stability across and between layers. To meet these challenges, the design methodologies and design tools necessary to implement and simulate/validate 3D architectures that integrate these new technologies must be developed.
-
More Moore’s Law through Computational Scaling - and EDA’s Role, David Z. Pan, University of Texas at Austin
As optical lithography equipment is pushed to its limits to print feature sizes of 32nm/22nm and below (e.g., using 193nm lithography with double patterning, EUV, etc.), computational scaling is expected to play an ever more important role in further extending Moore’s Law. In this talk, I will discuss a couple of key aspects of computational scaling, e.g., (i) how to leverage massive computational power to perform ultimate resolution enhancement and enable further feature-size scaling through extreme computational lithography; (ii) how to perform synergistic computer-aided design and process integration through effective predictive models and machine-learning techniques for higher performance, power, reliability, and yield. The role of parallel/multi-core computing and domain-specific computing (e.g., hardware acceleration) will be discussed as well. As "easy" scaling (achieved simply through equipment advances) comes to an end, computational scaling and EDA will play an ever greater role in filling the gap.
-
Robotics-based fabrication and assay automation for In Vitro Diagnostics Technologies, Jim Heath, Caltech
Over the next one to two decades, health care will evolve from reactive medicine (disease is detected and treated in late stages) to personalized, proactive, and preventative medicine (presymptomatic disease is detected and treated on an individual basis). At the heart of this transformation will be a host of new, miniaturized technologies that permit biological information to be acquired and analyzed quickly and cheaply. With this motivation, my laboratory has been involved in developing technologies that permit rapid and inexpensive measurements of large panels of protein-based biomarkers. In this talk, I will briefly discuss two separate technologies. The first is a microfluidics-based chip, constructed from glass and plastic, designed as a stand-alone device that can separate plasma from whole blood and then rapidly quantitate the levels of many proteins in that separated plasma. Robotics for chip construction, as well as automated protocols for assay execution, will be discussed. The second technology, which attempts to address the key bottleneck in in vitro protein diagnostics, is an approach to preparing chemically synthesized protein capture agents with antibody-like properties. The capture agents are stable and may be prepared in bulk quantities and stored as powders. I will focus on approaches for making the development of such capture agents a high-throughput process.
-
Synthetic Biology: A New Application Area for Design Automation Research, Chris Myers, Univ. of Utah
EDA tools have facilitated the design of ever more complex integrated circuits each year. Synthetic biology would also benefit from the development of genetic design automation (GDA) tools. Synthetic biology has the potential to help us produce drugs more economically, metabolize toxic chemicals, and even modify bacteria to hunt and kill tumors. There are, however, numerous challenges in designing the genetic circuits used in these applications. First, existing GDA tools require biologists to design and analyze genetic circuits at the molecular level, roughly equivalent to the layout level for electronic circuits. Another serious challenge is that genetic circuits are composed of very noisy components, making their behavior more asynchronous, analog, and non-deterministic in nature. New GDA research is necessary to address these challenges. Interestingly, future electronic circuits may soon face many of the same challenges, which opens up the intriguing possibility that this research may in the future also be used to produce more robust and power-efficient electronic circuits. This talk will briefly describe our first steps in the development of iBioSim, a GDA tool that supports higher levels of abstraction, and will also present some of the important theoretical and computational research problems in this area that will need to be addressed.
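To give a concrete sense of the noisy, non-deterministic behavior such GDA tools must analyze, here is a sketch of Gillespie's stochastic simulation algorithm on a toy gene-expression model (transcription, translation, and degradation). The rate constants are illustrative and not drawn from iBioSim or any real circuit.

```python
# Sketch: Gillespie SSA for a two-species gene-expression model.
import random

def gillespie(t_end=1000.0, seed=1):
    rng = random.Random(seed)
    mrna, protein = 0, 0
    k_tx, k_tl, d_m, d_p = 0.5, 0.2, 0.1, 0.02   # illustrative rate constants
    t = 0.0
    trace = []
    while t < t_end:
        # Propensities of the four reactions in the current state.
        props = [k_tx, k_tl * mrna, d_m * mrna, d_p * protein]
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)               # exponential time to next reaction
        r = rng.random() * total                  # choose which reaction fires
        if r < props[0]:
            mrna += 1                             # transcription
        elif r < props[0] + props[1]:
            protein += 1                          # translation
        elif r < props[0] + props[1] + props[2]:
            mrna -= 1                             # mRNA degradation
        else:
            protein -= 1                          # protein degradation
        trace.append((t, mrna, protein))
    return trace

if __name__ == "__main__":
    final_t, m, p = gillespie()[-1]
    print(f"t={final_t:.1f}  mRNA={m}  protein={p}")
```

Repeated runs with different seeds give visibly different trajectories, which is exactly the intrinsic noise that makes genetic circuits behave in an analog, non-deterministic way.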
-
EDA and Biology of the nervous system, Lou Scheffer, Cadence
EDA tools developed over the last few decades have been very successful at analyzing the man-made circuits created by human designers, largely in semiconductor technology. Now, for the first time, scientists are able to discover the networks of neurons that make up the computational machinery of biological systems. Even very small animals possess systems that are similar in size and complexity to the ICs of today - for example, a fruit fly's brain is thought to consist of about 100K neurons and 100M connections. These systems have capabilities unmatched by current IC technology in terms of real-time response, adaptation, and learning. We need to understand how they work.
Although these networks operate electrically, the existing EDA tools are not well equipped to handle them. Here are a few of the differing features that need to be explored: (a) combinations of codings, including pulse-rate coding, analog levels, and pulse timing; (b) macro-models and reduced-order modeling working over a wide dynamic range; (c) similar (not identical) sub-circuit matching; (d) learning and adaptation through several mechanisms - time-varying circuits (both component values, which are somewhat supported, and the addition of new connections during operation, which is not) and correlated subsets of changes corresponding to neurotransmitters; (e) integration of sensors, actuators, and computation. Doubtless there are others as well.
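As a hint of what simulating such components involves, here is a sketch of a leaky integrate-and-fire neuron whose firing rate encodes its input current, i.e., a simple pulse-rate code. All parameters are illustrative textbook-style values, not from the talk.

```python
# Sketch: leaky integrate-and-fire neuron with a pulse-rate code.
def lif_spike_times(input_current, dt=0.1e-3, t_end=0.5,
                    tau=20e-3, r_m=10e6, v_rest=-70e-3,
                    v_thresh=-54e-3, v_reset=-80e-3):
    """Integrate dV/dt = (-(V - v_rest) + R_m * I) / tau and record spike times."""
    v = v_rest
    spikes = []
    for k in range(int(t_end / dt)):
        v += dt * (-(v - v_rest) + r_m * input_current) / tau
        if v >= v_thresh:          # emit a spike and reset the membrane potential
            spikes.append(k * dt)
            v = v_reset
    return spikes

if __name__ == "__main__":
    # Firing rate grows with the input current: a simple pulse-rate code.
    for current in (1.7e-9, 2.0e-9, 3.0e-9):
        rate = len(lif_spike_times(current)) / 0.5
        print(f"I={current*1e9:.1f} nA -> {rate:.0f} spikes/s")
```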
There is a huge opportunity in analyzing, and eventually constructing and optimizing, these systems. They solve problems we cannot currently address, and do so within very tight constraints. Computer tools are absolutely required to address systems of the size and scale of even small nervous systems, and EDA technology is by far the best base to work from, but without significant enhancement it cannot address the operation of biological systems. Therefore, in terms of both scientific and technical opportunity, it makes sense to extend the current EDA tools in this direction. This will require close cooperation between the existing EDA community and the communities of biologists and physicists who are beginning to study the nervous system in detail.
6:30pm-9:00pm : Dinner
Willow Restaurant
4301 N. Fairfax Drive
Arlington, VA 22203
(703) 465-8800
*Willow Restaurant provides discounted parking:
Each Willow guest receives discounted parking in the underground Colonial parking lot (entrance on Taylor Street north of Fairfax Drive). At dinner and on weekends Willow guests are charged $2.50 per car with no time limit. To receive these discounted rates parking tickets must be validated at Willow Restaurant.
If you choose to park on the street, Arlington County is VERY SERIOUS about ticketing and towing vehicles not parked within their very specific guidelines - which they enforce with alarming efficiency. Please read the meter hours and rates carefully. Our discounted, validated parking in our building covers all guests of the restaurant at lunch (valid for two hours from 11am- 3pm) and dinner.
The restaurant is ADA accessible from the garage.
Day 2 - July 9 - Thursday
7:00am-8:00am: Breakfast
8:00am -10:00am: Group discussions
10:00am-10:30am: Break
10:30am-12:30pm : Session V
-
Presentation by the theory group and discussions
-
Recommendations and Conclusions
12:30pm-1:30pm: Lunch