Companies such as Novartis and Intel are at the forefront of Science 2.0 by encouraging open systems of collaboration
Earlier this month, Swiss drugmaker Novartis did something rather unusual—and almost unheard of in the high-stakes, highly competitive world of Big Pharma. After investing millions trying to unlock the genetic basis of type 2 diabetes, the company released all of its raw data on the Internet. This means anyone (or any company) with the inclination is free to use the data—no strings attached.
Type 2 diabetes and related cardiovascular risk factors—including obesity, high blood pressure, and high cholesterol—are among the most common and most costly public health challenges in the industrialized world. Pinpointing their precise genetic origins could unlock a treasure trove of new medicines and result in a major windfall for Novartis (NVS) shareholders.
So why the giveaway? "These discoveries are but a first step," says Mark Fishman, president of the Novartis Institute for BioMedical Research. "To translate this study's provocative identification of diabetes-related genes into the invention of new medicines will require a global effort."
In other words, the research conducted by Novartis and its university partners at MIT and Lund University in Sweden merely sets the stage for the more complex and costly drug identification and development process. According to researchers, there are far more leads than any one lab could possibly follow up alone. So by placing its data in the public domain, Novartis hopes to leverage the talents and insights of a global research community to dramatically scale and speed up its early-stage R&D activities.
It's worth noting that Novartis didn't reveal everything. For example, it didn't give away three years' worth of its own observations on the data, which gives it a substantial lead time on other companies attempting to exploit the research. Meanwhile, the close ties and goodwill that Novartis has fostered with the research community studying diabetes will give it an advantage over competitors as it moves to the next stage of research.
The Novartis collaboration is just one example of a deep transformation in science and invention. Just as the Enlightenment ushered in a new organizational model of knowledge creation, the same technological and demographic forces that are turning the Web into a massive collaborative work space are helping to transform the realm of science into an increasingly open and collaborative endeavor. Yes, the Web was, in fact, invented as a way for scientists to share information. But advances in storage, bandwidth, software, and computing power are pushing collaboration to the next level. Call it Science 2.0.
The Earth System Grid (ESG), for example, is an experimental data network that integrates supercomputing power with large-scale data and analysis servers for scientists collaborating on climate studies. Once the first of its kind, the project is now one of several virtual collaborative environments that link distributed centers, users, models, and data throughout the U.S. Data for the project is being collected from a wide range of sources, including ground- and satellite-based sensors, computer-generated simulations, and thousands of independent scientists. The grid will accelerate the execution of climate models 100-fold and allow scientists to perform high-resolution, long-duration simulations that harness the community's distributed data systems. The ESG's founders anticipate the project will revolutionize our understanding of global climate change.
Wider Peer Review
Indeed, in just about every discipline, plummeting computing and collaboration costs are encouraging the formation of large-scale research networks. A decade ago, disciplines such as astronomy were still driven by small groups of scientists keeping observational data proprietary and publishing individual results. With projects like the Sloan Digital Sky Survey, astronomy is now organized around massive data sets that are shared and coded by the community. The free and open exchange of information and ideas will provide astronomers with an unprecedented map of the universe in a fraction of the time it would have taken using conventional methods.
As large-scale scientific collaborations become the norm, scientists will rely increasingly on distributed methods of collecting data, verifying discoveries, and testing hypotheses not only to speed things up but to improve the veracity of scientific knowledge itself. For example, rapid, iterative, and open-access publishing will engage a much greater proportion of the scientific community in the peer-review process. Conventional paper-based scientific journals, meanwhile, will be augmented by dynamic publishing tools such as blogs, wikis, Web-enabled RSS feeds, and podcasts that turn scientific publications into living documents. Projects such as MIT's OpenWetWare are already doing this.
There will always be aspects of scientific inquiry that are painstakingly slow and methodical. But scientific institutions can take steps to encourage mass collaboration. Discarding the outmoded, manual data-permission policies that currently thwart the ability to share data would enable scientific Web services to weave together information from all of the world's databases. Teams of scientists that invest heavily in collecting data, and understandably feel justified in retaining privileged access to it, could apply Creative Commons licenses that stipulate rights and credits for the reuse of data, while allowing uninterrupted access by networked computers.
Leading scientific observers already expect more change in the next 50 years of science than in the last 400 years of inquiry combined. As the pace of science quickens, there will be less value in stashing new scientific ideas, methods, and results in subscription-only journals and databases, and more value in wide-open collaborative-knowledge platforms that are refreshed with each new discovery. These changes will enhance the ability of scientists to find, retrieve, sort, evaluate, and filter the wealth of human knowledge, and, of course, to continue to enlarge and improve it. Meanwhile, faster feedback cycles from public knowledge to private enterprise, enabled by more nimble industry-university networks, will allow new knowledge to flow more quickly into practical uses and enterprises.
As mass collaboration takes root in the scientific community, companies have an opportunity to rethink how they do science, and even how they compete. One area in which new open scientific collaborations can pay dividends is in the early detection of disruptive innovations that could threaten a company's product roadmap—or, even better, generate entirely new products and services.
Take Intel (INTC). Accelerating technological change and heightened competition from Asian semiconductor companies have put the heat on the veteran chipmaker. In order to stay ahead, Intel needs to expand into new offerings and find ways to add value to chips, which are increasingly low-cost commodities. The problem for companies like Intel is that the kind of exploratory research required to renew product roadmaps and identify disruptive innovations is the most costly and risky. So like a growing number of businesses in fast-moving, tech-intensive industries, Intel is sharing these costs and risks through an open and collaborative model of industry-university partnerships.
Dividing the Fruits
Sure, industry-university partnerships have existed for centuries. But in the new model, companies and their collaborators don't squirrel themselves away in secretive laboratories and retain proprietary access to all of the data and outputs. Instead, they open up the early-stage research to the world in order to widen participation and accelerate discoveries, while positioning themselves to move strongly and rapidly into a latent market as new ideas and inventions emerge.
Intel established exploratory research labs adjacent to leading research centers such as Berkeley, Cambridge University in Britain, Carnegie Mellon, and the University of Washington. Intel provides the funding, and each lab houses 20 Intel employees and 20 university researchers. While each lab has a unique focus—such as ubiquitous computing or distributed storage—the research teams from the labs meet regularly, as Intel has found that some of the most promising insights and applications flow from unexpected synergies that arise when teams from different institutions discuss their research.
And rather than wrangle over who gets to control and exploit the fruits of joint research efforts, Intel and its academic partners sign Intel's open collaborative research agreement, which grants nonexclusive rights to all parties. Both sides retain their freedom to engage in further research, develop new products, and partner with other players. Like the pharmaceutical industry, Intel is finding that the benefits of casting a wide net for new ideas and learning rapidly from the external research ecosystem greatly outweigh the advantages gained from keeping the basic scientific research proprietary.
More than Good Manners
The results so far seem to justify Intel's approach. In the four years since the first exploratory lab was launched, research in areas such as polymer storage, micro-electromechanical systems (MEMS), optical switching, inexpensive radio frequency (RF), and mesh networking has matured more quickly than expected. Many ideas have already been transferred downstream toward product development.
In the end, close cooperation with leading universities helps Intel maintain its edge, while spreading the upfront costs of R&D across a much broader research ecosystem. By leveraging its university connections skillfully, Intel gains access to the results produced by the bulk of the research community without sacrificing its ability to exploit the research in the downstream stages of product development.
The bottom line is that sharing knowledge and data in scientific communities is not just good playground etiquette; it's about growth, innovation, and profit. By sharing basic scientific data and collaborating across institutional boundaries, companies like Novartis and Intel are challenging a deeply held belief that early-stage R&D activities are best pursued within the confines of secretive laboratories. As a result, both have been able to cut costs, accelerate innovation, create more wealth for shareholders, and ultimately help society reap the benefits of scientific research more quickly.
A Better Base
What's more, this logic of sharing doesn't apply only to science. "Just as it's true that a rising tide lifts all boats," says Tim Bray, director of Web technologies at Sun Microsystems (SUNW), "we genuinely believe that radical sharing is a win-win for everyone. Expanding markets create new opportunities." Under the right conditions, the same could be said of most industries, from automobiles to other consumer products.
Of course, companies need to protect critical intellectual property. But they can't collaborate effectively if all of their IP is hidden. Contributing to the scientific commons isn't altruism—it's the best way to build vibrant business ecosystems that harness a shared foundation of knowledge to accelerate growth and innovation.