Picture the scene: An R&D lab where ground-breaking research is taking place.
“Will this new molecule work in the treatment of Alzheimer’s?” the scientist asks.
A computer responds: “Gathering data… Analysing data… Working… Analysis complete. This new molecule has a 56% chance of showing effects in the treatment of Alzheimer’s, but only in the male population with gene mutation X.”
This is how the utopian state of R&D will work if scientists have their way, but getting there will push artificial intelligence (AI), and all associated technologies that fall under that banner, to the extreme.
We are currently seeing a massive uptake of machine learning, deep learning and assisted intelligence in the R&D space – not just in life sciences, but in all areas of product development where science plays a part. Investment organisations, convinced they have found their next big money-making opportunity, are even publicly advertising for AI-based companies to hand their cash to.
But the big question is: how close are we to the point where AI fulfils the role we hope it will?
We need to be realistic about what we hope AI can become, and what role it can fulfil today. Looking beyond the media hype, we should sit back and take it all with a pinch of salt. After all, we are still in a phase of technology evolution, acceptance and validation. There are plenty of stories in the media about robots overtaking humans, but we may not be there just yet.
Don’t lose heart though; there are definitely promising signs – the upsides and opportunities are mind-boggling – but a lot of work still needs to be done.
The change we’ve seen over the past few decades has already been phenomenal. Back in the early 90s I worked with neural networks as part of my PhD and – with limited compute power – the results were encouraging, even without the huge data sets and hardware available today. With the advent of cloud access, and chip sets designed to massively improve compute performance, the stage is now set and the foundations are in place for AI concepts to be proven in the real world.
There are already examples of deep learning algorithms that are able to match or exceed human decision-making. The catch, however, is that these algorithms can’t learn on their own just yet: they need to be taught. And this teaching, for the time being at least, needs to come from humans.
For ‘deep learning’ to work, there needs to be a defined relationship between the inputs and the outputs. In the lab, we used to call this learned relationship the ‘fudge factor’, and to get a fudge factor that predicts well, the model needs as much training data as you can give it – but remember, more data isn’t always the answer: success also depends on the quality and accessibility of that data.
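To make that concrete, here is a minimal, purely illustrative sketch of learning an input-to-output mapping from training data. It uses scikit-learn with synthetic data standing in for real R&D measurements; the model’s learned weights are, in effect, the ‘fudge factor’.

```python
# A minimal sketch of supervised learning: a model fitting a mapping from
# inputs to outputs. The data is synthetic and purely illustrative; real R&D
# measurements would replace make_classification entirely.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "assay" data: 2,000 samples, 20 input features, binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network; its learned weights encode the relationship
# between inputs and outputs – the 'fudge factor'.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Held-out performance is only as good as the quantity and quality of the
# training data fed in.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```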
So, AI tools absolutely have a role to play in today’s R&D world – just look at deep learning as an example of automating a human process and continually optimising it to make it more accurate. We already see this in the areas of pathology and disease identification, where AI takes what a human does (diagnosing a disease based on a set of inputs – images, test results, etc.) and automates it.
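The ‘continually optimising’ part can be sketched as incremental learning: updating a model as newly reviewed, expert-labelled cases arrive, rather than retraining it from scratch. The SGDClassifier and the simulated weekly batches below are illustrative stand-ins, not a real pathology pipeline.

```python
# A sketch of continually optimising an automated decision: an incremental
# model updated as newly labelled cases arrive, instead of being retrained
# from scratch. The data is simulated purely for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # e.g. 'no disease' / 'disease'

# Simulate weekly batches of newly reviewed cases (input features + expert label).
for week in range(10):
    X_batch = rng.normal(size=(200, 15))
    y_batch = (X_batch[:, 0] + 0.5 * X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

# The trained model now automates the decision on new, unlabelled cases.
X_new = rng.normal(size=(5, 15))
print(model.predict(X_new))
```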
Some areas of R&D do have reasonable data, and good quantities of it – but often this data is not connected and is not accessible. New companies are emerging that aim to change this by providing ‘data lakes’ and enrichment tools that allow the data to be polished and rendered usable. Furthermore, some tech companies are leveraging systems and algorithms from other industries to assess the quality of data sets so that they can be weighted during algorithm training.
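One way such quality scores could feed into training is as per-record sample weights, so that higher-quality records influence the fitted model more. The quality scores below are invented for illustration; in practice they would come from the kind of assessment tooling described above.

```python
# A sketch of weighting training records by an estimated quality score. The
# quality scores here are invented; in practice they would come from
# data-assessment tooling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

# Hypothetical per-record quality scores in [0.2, 1.0], e.g. derived from
# provenance checks, completeness metrics or cross-source agreement.
quality = np.random.default_rng(1).uniform(0.2, 1.0, size=len(y))

# Higher-quality records contribute more to the fitted model.
model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=quality)

print(f"Training accuracy (quality-weighted fit): {model.score(X, y):.2f}")
```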
This is great news for R&D, as it could provide both more data and better-quality data.
And yet, for hope to meet reality, we still need to do more work: the next trick will be understanding which questions need answering and which questions the data can actually answer. As you can imagine, these are not always aligned, which means very high expectations are being laid at the feet of the AI community that can’t always be met (well, not yet at least!).
The next few years will be very interesting from both a technological and a commercial point of view. Will organisations combine their data to help train and advance the algorithms and insights? We have seen this in the pre-competitive space in the pharma industry, but will it move to become more commercially driven and competitive? My money says yes – and AI will be at the forefront.