Artificial intelligence, medical imaging and workflow automation

Over the last decade, artificial intelligence has arguably been the most talked-about technology in computer science.

Online behavioural analysis, big-data forecasting, meteorology, self-driving cars and speech recognition are all built on technologies that employ artificial intelligence to some degree. But what exactly is this ‘artificial intelligence’? How intelligent is it, really? And, last but not least, should we fear it?

So what is artificial intelligence?

Traditionally, when a developer writes a piece of code, say a Java app, it performs a set of computations or provides a piece of functionality, and the way it does so is meticulously spelled out in the program itself. How efficiently the app performs depends on the resources available to it, but two identical apps hosted on two identical servers will always perform the same way, and that won’t change over time as long as the underlying resources stay constant.

Here’s the interesting part about artificial intelligence (we’ll call it AI for short): the more data you throw at it, the better it gets over time at solving the task it was given.
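As a toy illustration of that claim, here is a minimal sketch, assuming scikit-learn and its bundled handwritten-digits dataset (neither appears in the original article): the same classifier, trained on progressively more examples, tends to score progressively better on unseen data.

```python
# Minimal sketch: the same model improves as it sees more training data.
# scikit-learn and the digits dataset are illustrative choices only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):  # growing amounts of training data
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(n, model.score(X_test, y_test))  # accuracy typically climbs with n
```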

When will it be smarter than humans?

Probably never. Not in this form, anyway. For example, let’s say you have an algorithm that needs to identify streetlights at crossroads. Initially, you would have to feed it all sorts of images clearly labelled ‘streetlight’ or ‘no streetlight’, which the algorithm uses to ‘learn’ what a streetlight actually looks like: how it is positioned at crossroads, how it stands out from trees and other objects in the background, and so on.

Once this collection of images gets big enough and the algorithm has seen tens or maybe hundreds of thousands of labelled images, the process can be inverted: given unlabelled photos of all sorts of roads, the algorithm can detect which crossroads have streetlights and which don’t. This ability to ‘evolve’, to become more efficient at the task at hand when given enough data to ‘learn’ from, is what earned this kind of technology the label ‘intelligent’.
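In code, that two-step process might look something like the sketch below. It is a hypothetical illustration only: the tiny PyTorch network, the image size and the randomly generated stand-in data are all assumptions, not anyone’s production model.

```python
import torch
import torch.nn as nn

# A deliberately tiny CNN for 'streetlight' vs 'no streetlight'.
class StreetlightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = StreetlightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for the labelled crossroads photos described above.
images = torch.randn(8, 3, 64, 64)   # batch of 64x64 RGB images
labels = torch.randint(0, 2, (8,))   # 0 = no streetlight, 1 = streetlight

# Step 1: learn from labelled examples.
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Step 2: invert the process and predict on unlabelled photos.
with torch.no_grad():
    new_photos = torch.randn(4, 3, 64, 64)
    detected = model(new_photos).argmax(dim=1)  # 1 where a streetlight is found
```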

Sure, after a while it will be better at identifying streetlights at crossroads than you or me, but that doesn’t mean it’ll be more intelligent than either of us any time soon.

Is Judgement Day a real threat?

Well, not exactly. Firstly, AI can only solve the problems it was originally trained to solve. That means starting with data that needs quality labelling (‘streetlight’ or ‘no streetlight’, remember?) and feeding it to the algorithm. So ultimately, the algorithm will be able to identify streetlights, but that’s just about it.

You’ve probably heard of ‘machine learning’ too. It only sounds more frightening; it’s basically the same thing: a piece of software that’s good at identifying patterns of all sorts. There’s a long way to go before artificial intelligence could become a threat to mankind, if that ever happens at all.

Some of the world’s greatest minds and entrepreneurs have already given the subject serious thought and even wrote an open letter warning about the dangers we might face, but at this point there really is no immediate cause for concern: https://futureoflife.org/ai-open-letter

Radiologist workflow automation using artificial intelligence

Remember how artificial intelligence is great at identifying patterns in images? This tech is used in self-driving cars, face-recognition software and smartphone apps that aptly identify objects in photos. At Medicai, we employ it in a very similar way: we use it to identify patterns in MRI, CT or PET-CT scans. For example, one way our researchers are using this technology is organ segmentation, that is, distinguishing individual organs from the background and foreground.

Imagine you have a chest MRI and need to track the evolution of what appears to be a tumour. You’d need a radiologist to manually analyse the MRI scan and try to distinguish the tumour from the background and foreground elements. Afterwards, they’d need an efficient way to calculate its volume, so they have a starting point for tracking its evolution. Then they’d have to go through the entire process again on a follow-up MRI scan to make the comparison. Here’s where the technology comes in: we’ve developed an algorithm that analyses MRI scans, automatically segments organs (distinguishes them from the background) and even calculates their volume.
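The volume step, at least, is easy to picture. Here is a minimal sketch of turning a segmentation mask into a volume and comparing two scans; the masks, the voxel spacing and the function name are all made up for illustration (in a real pipeline the spacing would come from the scan’s DICOM metadata).

```python
import numpy as np

def volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary 3D mask, given voxel spacing (z, y, x) in mm."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> millilitres

# Stand-ins for the masks a segmentation model would output for the
# baseline scan and the follow-up scan.
baseline = np.zeros((40, 256, 256), dtype=bool)
baseline[10:20, 100:130, 100:130] = True
followup = np.zeros((40, 256, 256), dtype=bool)
followup[10:22, 100:134, 100:134] = True

spacing = (3.0, 0.7, 0.7)  # slice thickness and in-plane resolution, in mm
change = volume_ml(followup, spacing) - volume_ml(baseline, spacing)
print(f"Volume change between scans: {change:+.1f} ml")
```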

Automating this part of the radiologist’s workflow has a huge impact on patient wellbeing: radiologists no longer need to waste time doing these tasks manually, and they have a far more precise way to compare tissue evolution at their fingertips. Just take the new scans, run the software and see the difference between volumes. This means more time for patients and efficiency where it really matters: providing patients with the best possible treatment.

 


About the author - Mircea Popa

Mircea Popa is the CEO and co-founder of Medicai. He previously founded SkinVision, a mobile app designed to detect melanoma (skin cancer) through machine-learning algorithms applied to images taken with smartphones. He believes that a multidisciplinary approach to medicine is possible only when everyone has access to a better way to store, transmit and collaborate on medical data.