Leverage Artificial Intelligence in Diagnosis
Open-Source Platform for Health Professionals

We are currently collecting data in order to build and train our models

OUR INITIATIVE

WHAT WE DO

Coronavirus disease (COVID-19) is an infectious disease caused by a new virus that had not previously been identified in humans. The imaging appearance of COVID-19 on radiographs and CT indicates severe damage to the lungs; however, this appearance is not specific to the disease and can also be seen with other infections.

Our data scientists and AI engineers have a solid track record of using deep learning technology to develop software that detects patterns and features in datasets.

In our platform, we implement deep learning technology, a subfield of artificial intelligence that has performed remarkably well in image classification, segmentation and processing tasks in recent years. Our proposed software contains multiple convolutional neural networks (CNNs), collectively called PolyNet. PolyNet takes a chest X-ray image as input and outputs the probability of pneumonia along with a heatmap localising the areas of the image most indicative of pneumonia. We construct the initial network to make one of the following four predictions: a) no infection (normal), b) bacterial infection, c) non-COVID viral infection, and d) COVID-19 viral infection. The rationale for choosing these four predictions is that they can help clinicians decide who should be prioritised for PCR testing.
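To make the design above more concrete, here is a minimal, hypothetical sketch of what a single member network of such an ensemble could look like in PyTorch: a pretrained backbone with a four-way classification head that returns a probability for each of the classes listed above. The backbone choice (ResNet-18), preprocessing values and function names are illustrative assumptions, not the actual PolyNet implementation.

```python
# Minimal sketch (not the production PolyNet): one CNN member that maps a
# chest X-ray to probabilities over the four classes described above.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["normal", "bacterial", "viral_non_covid", "covid19"]

def build_member_network() -> nn.Module:
    # ImageNet-pretrained backbone with a new 4-way classification head.
    net = models.resnet18(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, len(CLASSES))
    return net

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(net: nn.Module, image_path: str) -> dict:
    # X-rays are grayscale; convert to 3 channels for the pretrained backbone.
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)
    net.eval()
    with torch.no_grad():
        probs = torch.softmax(net(x), dim=1)[0]
    return {c: float(p) for c, p in zip(CLASSES, probs)}
```

Several such members, possibly with different backbones, could then be averaged to form the ensemble that the text refers to as PolyNet.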

It is crucial in this pandemic to predict, quickly and efficiently, the appropriate procedure for treating patients with Covid-19 based on the severity of the infection. We therefore use a Life-Long Deep Neural Network (L-DNN) so that our software can continue learning from radiologists while in clinical use. The software will be able to suggest measures after recognising Covid-19 pneumonia and the severity of the infection in the lungs. The suggested procedures comply with the guidelines provided by the Robert Koch Institute.

The following describes our steps in implementing AI in Covid-19 risk stratification:

Data Preparation

01

The main requirement for developing models with high precision is to have sufficient data. Data, in this case, come from various sources around the world and need to be collected, enriched and prepared.

Training & Optimisation

02

Our machine learning and deep learning models will be trained to distinguish Covid-19 from at least five other diseases with similar infection symptoms.

Deployment

03

The models will be deployed as a SaaS in the cloud and licensed free of charge to any health organisation around the globe. This will only be possible once the software is legally and academically approved. A minimal deployment sketch follows these steps.
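As an illustration of the deployment step, the following is a minimal sketch of a cloud inference endpoint, assuming a trained member network has been exported with TorchScript to a file named polynet_member.ts; the file name, class list and framework choice are assumptions for the example, not the final service.

```python
# Illustrative deployment sketch only: a minimal inference endpoint.
# Assumes a model exported with torch.jit.save() to "polynet_member.ts"
# (hypothetical file name) and a fixed four-class output.
import io
import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import transforms

CLASSES = ["normal", "bacterial", "viral_non_covid", "covid19"]
model = torch.jit.load("polynet_member.ts", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

app = FastAPI(title="Chest X-ray triage (sketch)")

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    img = Image.open(io.BytesIO(await file.read())).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    # Return the full class distribution so clinicians see all probabilities.
    return {c: float(p) for c, p in zip(CLASSES, probs)}
```

Such a service could be run locally with, for example, uvicorn, and placed behind the usual authentication and audit layers required for clinical use.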

How can you participate in our research?

Building precise deep learning models requires a considerable amount of data. Because Covid-19 has only been spreading for a few months, there are not enough data publicly available. If you are a healthcare organisation, researcher or clinician, you can provide us with your data, and our team will prepare it both for developing our own software and for a data repository for other AI engineers and researchers.

Is the software ready to use?

Not yet; our AI engineers and data scientists are still in the development process. Our know-how is proven by our experience in developing similar image classification products.

How do we use the Metadata?

The metadata will be processed and enriched so that it can be used to train our deep learning and machine learning algorithms.

Will the data be available to other research institutions?

Yes, our goal is to make the database available to other organisations for current and future research and studies.

How important are chest radiographs in diagnosing Covid-19?

X-ray or CT is probably not the primary procedure for diagnosing Covid-19 infection; however, in most severe cases imaging is definitely required so that physicians can see the extent of the infection. As the number of patients with severe cases grows, this can quickly overload the radiology department, ER, ICU, or the whole medical facility at once.

Privacy policy and consent

The data are handled fully anonymously; however, the patient or a legal guardian must have given consent for sharing the data with a third party. The data provider is fully responsible for this matter.

In order to prevent any legal infringement, we ask all institutions to comply with the laws of their own jurisdiction. Kipoly is not responsible in any way for legal disputes involving the data provider at any time.

Exclusive data transfer with Kipoly: some institutions may prohibit the provided data from being made publicly available or from further use by any other organisation.

What is the Metadata schema?

Here is a list of each metadata field, with explanations where relevant (a minimal loading sketch follows the list):

1- Do not include the patient's name, address, or any other private information.
2- For easier data processing, please upload two files: a .csv file containing the metadata and a file containing the CT or X-ray images.
3- Patient id (internal identifier, just for this dataset)
4- Offset (number of days since the start of symptoms or hospitalization for each image, this is very important to have when there are multiple images for the same patient to track progression while being imaged. If a report says “after a few days” let’s assume 5 days.)
5- Sex (M, F, or blank)
6- Age (age of the patient in years)
7- Finding (which pneumonia)
8- Survival (did they survive? Y or N)
9- View (for example, PA, AP, or L for X-rays and Axial or Coronal for CT scans), in .dicom, .dcm, .jpeg, .jpg or .png format – ATTENTION: the posteroanterior view is mostly preferred
10- Modality (CT, X-ray, or something else)
11- Date (the date the image was acquired)
12- Location (hospital name, city, state, country), ordered by importance from right to left.
13- Filename
14- DOI (DOI of the research article)
15- Url (URL of the paper or website where the image came from)
16- License, if there is any
17- Patient has already been intubated (YES / NO)
18- Clinical notes (about the radiograph in particular, not just the patient)

19- Other notes (e.g. credit)

*Source: sirm.org
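For contributors who would like to check a submission before uploading, the following is a minimal sketch of how a metadata file under this schema could be loaded and sanity-checked. The column names mirror the field list above, while the function name and file layout are assumptions for illustration only.

```python
# Minimal sketch: load a submitted metadata .csv and check it against the
# field list above. File and column names are illustrative, not a fixed API.
from pathlib import Path
import pandas as pd

EXPECTED_COLUMNS = [
    "patient_id", "offset", "sex", "age", "finding", "survival", "view",
    "modality", "date", "location", "filename", "doi", "url", "license",
    "intubated", "clinical_notes", "other_notes",
]
IMAGE_EXTENSIONS = {".dcm", ".dicom", ".jpeg", ".jpg", ".png"}

def load_submission(csv_path: str, image_dir: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    missing = [c for c in EXPECTED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Missing metadata columns: {missing}")

    images = Path(image_dir)
    # Keep only rows whose image file exists and has an accepted extension.
    ok = df["filename"].apply(
        lambda f: (images / str(f)).suffix.lower() in IMAGE_EXTENSIONS
        and (images / str(f)).exists()
    )
    return df[ok].reset_index(drop=True)
```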


We use deep learning technology, a subfield of artificial intelligence that has performed remarkably well in image classification, segmentation and processing tasks. Our proposed software contains multiple convolutional neural networks (CNNs), which we collectively call PolyNet. We construct the initial network prototype to make one of the following four predictions: a) no infection (normal), b) bacterial infection, c) non-COVID viral infection, and d) COVID-19 viral infection. The rationale for choosing these four predictions is that they can help clinicians decide who should be prioritised for PCR testing for COVID-19 case confirmation. PolyNet takes a chest X-ray image as input and outputs the probability of pneumonia along with a heatmap localising the areas of the image most indicative of pneumonia.
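The text does not specify which localisation method PolyNet uses for its heatmap; Grad-CAM is one widely used technique for this kind of visualisation, and the sketch below shows how it could be computed for a ResNet-style backbone such as the member network sketched earlier. The layer name and function signature are assumptions for illustration.

```python
# Illustrative Grad-CAM sketch: one common way to produce the kind of
# "most indicative regions" heatmap described above. "model" is assumed to
# be a ResNet-style classifier; model.layer4 is its last convolutional block.
import torch
import torch.nn.functional as F

def grad_cam(model: torch.nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """x: preprocessed image of shape (1, 3, H, W); returns an (H, W) map in [0, 1]."""
    model.eval()
    feats, grads = {}, {}
    layer = model.layer4  # last convolutional block of a ResNet backbone

    def fwd_hook(_module, _inputs, output):
        feats["v"] = output
    def bwd_hook(_module, _grad_in, grad_out):
        grads["v"] = grad_out[0]

    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(x)
        logits[0, target_class].backward()   # gradients w.r.t. the chosen class
    finally:
        h1.remove()
        h2.remove()

    # Weight each feature map by its average gradient, combine, and upsample.
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting map can be overlaid on the original X-ray so radiologists can see which regions drove the prediction.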

A significant subject for improving artificial intelligence is continual learning, or lifelong learning: the ability to master successive tasks without forgetting how to execute previously learned tasks. The primary goal of continual learning is to overcome the forgetting of learned tasks and to leverage earlier knowledge to obtain better performance or faster convergence/training speed on newly arriving tasks. According to our radiology advisory board at Hannover Medical School, it is crucial and most needed in this pandemic to predict, quickly and efficiently, the appropriate procedure for treating patients with Covid-19. We therefore use a Life-Long Deep Neural Network so that our software can continue learning from radiologists while in clinical use. The software will be able to suggest measures after recognising Covid-19 pneumonia and the severity of the infection in the lungs. The suggested procedures comply with the guidelines provided by the Robert Koch Institute.

We aim to develop the software further and train similar models on CT-scan images, so that the software will also be able to perform the same tasks with CT scans as input.

  1. Winther HB, Gutberlet M, Hundt C, Kaireit TF, Alsady TM, Schmidt B, et al. Deep semantic lung segmentation for tracking potential pulmonary perfusion biomarkers in chronic obstructive pulmonary disease (COPD): The multi‐ethnic study of atherosclerosis COPD study. J Magn Reson Imaging [Internet]. 2019 Jul 5 [cited 2019 Jul 6]; Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/jmri.26853
  2. Maschke SK, Winther HMB, Meine T, Werncke T, Olsson KM, Hoeper MM, et al. Evaluation of a newly developed 2D parametric parenchymal blood flow technique with an automated vessel suppression algorithm in patients with chronic thromboembolic pulmonary hypertension undergoing balloon pulmonary angioplasty. Clin Radiol. 2019;74(6):437–44.
  3. Dewald CLA, Meine TC, Winther HMB, Kloeckner R, Maschke SK, Kirstein MM, et al. Chemosaturation Percutaneous Hepatic Perfusion (CS-PHP) with Melphalan: Evaluation of 2D-Perfusion Angiography (2D-PA) for Leakage Detection of the Venous Double-Balloon Catheter. Cardiovasc Intervent Radiol. 2019 Oct;42(10):1441–8.
  4. Behrendt L, Voskrebenzev A, Klimeš F, Gutberlet M, Winther HB, Kaireit TF, et al. Validation of Automated Perfusion-Weighted Phase-Resolved Functional Lung (PREFUL)-MRI in Patients With Pulmonary Diseases. Journal of Magnetic Resonance Imaging. 2019;
  5. Winther HB, Hundt C, Schmidt B, Czerner C, Bauersachs J, Wacker F, et al. ν-net: Deep Learning for Generalized Biventricular Mass and Function Parameters Using Multicenter Cardiac MRI Data. JACC: Cardiovascular Imaging. 2018 Jan 17;2479.
  6. Maschke SK, Hinrichs JB, Renne J, Werncke T, Winther HMB, Ringe KI, et al. C-Arm computed tomography (CACT)-guided balloon pulmonary angioplasty (BPA): Evaluation of patient safety and peri- and post-procedural complications. Eur Radiol. 2018 Sep 12;
  7. Winther H, Hundt C, Czerner C, Kaireit T, Wacker F, Shin H, et al. Vollautomatische, lappenbasierte Segmentierung von MR-Pefusionsmessungen in COPD Patienten mit Methoden des maschinellen Lernens. In: RöFo – Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren [Internet]. Georg Thieme Verlag KG; 2017 [cited 2018 Apr 18]. p. SP 105.5. Available from: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0037-1600185
  8. Czerner CP, Winther HB, Zapf A, Wacker F, Vogel-Claussen J. Breath-hold and free-breathing 2D phase-contrast MRI for quantification of oxygen-induced changes of pulmonary circulation dynamics in healthy volunteers. J Magn Reson Imaging. 2017 Dec 1;46(6):1698–706.
  9. Brochhausen C, Winther HB, Hundt C, Schmitt VH, Schömer E, Kirkpatrick CJ. A Virtual Microscope for Academic Medical Education: The Pate Project. Interactive Journal of Medical Research. 2015;4(2):e11.
  10. Brochhausen C, Schmitt C, Schmitt VH, Hollemann D, Weinheimer O, Mamilos A, et al. LATE-BREAKING ABSTRACT: Comparative studies on bronchuswall-thickness by histologic and computed tomographic measurements of porcine lungs. European Respiratory Journal. 2015 Sep 1;46(suppl 59):PA3737.

Invited Talks:

  1. Winther HB, Vogel-Claussen J. Radiomics & Deep Learning am Herzen: eine kritische Bestandsaufnahme. Invited talk presented at: 99. Deutscher Röntgenkongress; 2018 Apr.
  2. Winther H, Hundt C. Deep Semantic Lung Segmentation for Tracking Clinical Biomarkers of Chronic Obstructive Pulmonary Disease [Internet]. Invited talk presented at: GPU Technology Conference; 2018 Oct; ICM, Munich, Germany. Available from: http://on-demand-gtc.gputechconf.com/gtc-quicklink/gsYEQ

Selected Talks:

  1. Winther H, Hundt C, Schmidt B, Czerner C, Bauersachs J, Wacker F, et al. Deep Learning für die automatische Bestimmung von klinisch relevanten Herzparametern mittels Kardio-MRT. In: RöFo – Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren [Internet]. Georg Thieme Verlag KG; 2018 [cited 2018 Dec 28]. p. WISS 207.5. Available from: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0038-1641251
  2. Verloh N, Jürgens J, Hundt C, Ringe K, Wacker F, Schmidt B, Winther HB. Verwendung eines Neuronalen Netzwerkes zur Lebervolumenbestimmmung im 3T MRT. In: RöFo – Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren [Internet]. Georg Thieme Verlag KG; 2018 [cited 2018 Dec 28]. p. WISS 306.2. Available from: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0038-1641249
  3. Chieu VV, Winther H, Hundt C, Schmidt B, Lenzen H, Manns M, et al. Automatische Detektion der primär sklerosierenden Cholangitis (PSC) anhand von 3D-MRCP Datensätzen mittels Deep Learning. In: RöFo – Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren [Internet]. Georg Thieme Verlag KG; 2018 [cited 2018 Dec 28]. p. WISS 306.1. Available from: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0038-1641250
  4. Winther HB, Hundt C, Schmidt B, Czerner CP, Wacker FK, Vogel-Claussen J. Cardioblaster: Determination of Clinical Cardiac Parameters from Short-Axis MRI Scans using Deep Learning. Oral presented at: RSNA 2017 Annual Meeting; 2017 Nov 27; Chicago.
  5. Winther HB, Hundt C, Czerner CP, Kaireit T, Wacker FK, Shin H-O, et al. Vollautomatische, lappenbasierte Segmentierung von MR-Pefusionsmessungen in COPD Patienten mit Methoden des maschinellen Lernens. Oral presented at: 98. Deutscher Röntgenkongress; 2017 May 24; Leipzig.
  6. Vogel-Claussen J, Winther HB, Zapf A, Wacker FK. Quantifizierung sauerstoffinduzierter hämodynamischer Veränderungen im Truncus pulmonalis von gesunden Probanden mittels zweidimensionaler Phasenkontrast-MRA. Oral presented at: 98. Deutscher Röntgenkongress; 2017 May 26; Leipzig.
  7. Winther HB, Czerner CP, Hundt C, Kaireit T, Wacker FK, Shin H-O, et al. Fully-Automated Multi-Atlas Lobe Based Lung Segmentation of Lung Perfusion MR Images Using Machine Learning Techniques in COPD Patients. Digital Poster presented at: RSNA 2016 Annual Meeting; 2016 Nov 28; Chicago.
  8. Winther HB, Brochhausen C, Brochhausen M, Topaloglu U, Kirkpatrick CJ. A new biobank concept to optimize the quality of data-rich specimens. Virchows Arch. 2014 Aug 1;465(Supplement):295.
  9. Winther H, Brochhausen C, Affeldt H, Horstmeyer J, Kirkpatrick CJ. Demand-Driven Development of Next Generation Whole Slide Imaging – Pate [Internet]. Medicine 2.0’13; 2013; London. Available from: http://www.medicine20congress.com/ocs/index.php/med/med2013/paper/view/1708

Our strategy is to gather more visual data and prepare a comprehensive open-source dataset for our own research and for others, now and in the future. Our core system will be trained on frontal-view X-ray images; however, we plan to develop the platform further and build new deep learning models trained on lateral views and CT scans, and even NLP models for processing patient information much faster.

We are developing an algorithm that can detect pneumonia caused by Covid-19 from chest X-rays at a level exceeding that of practising radiologists. Our algorithm is designed to distinguish Covid-19 from 14 other diseases. We are first training a convolutional neural network on these 15 diseases, using the largest publicly available chest X-ray dataset, released by Stanford University, which contains over 100,000 frontal-view X-ray images labelled with 14 diseases.

Our second model, which distinguishes viral from bacterial pneumonia, is being trained on over 6,000 X-rays.

For our third model, we hope to gather enough data to train a network that distinguishes Covid-19 from MERS, SARS, and ARDS.
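As a rough illustration of this staged training plan, the sketch below builds a multi-label network for the 14 findings and then reuses its feature extractor with a new head for the COVID-19-specific classes. The backbone choice (DenseNet-121), hyperparameters and data loaders are placeholder assumptions, not our final training recipe.

```python
# Hedged sketch of the staged training idea: start from a backbone intended
# for a large multi-label chest X-ray dataset, then fine-tune it for the
# COVID-19-specific classes. Dataset objects and paths are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

def build_pretraining_network(n_findings: int = 14) -> nn.Module:
    # Stage 1 network: multi-label classifier over the 14 findings
    # (trained with BCEWithLogitsLoss; that training loop is omitted here).
    net = models.densenet121(pretrained=True)
    net.classifier = nn.Linear(net.classifier.in_features, n_findings)
    return net

def adapt_for_covid(net: nn.Module, n_classes: int = 4) -> nn.Module:
    # Stage 2: reuse the learned feature extractor, swap in a new 4-way head.
    net.classifier = nn.Linear(net.classifier.in_features, n_classes)
    return net

def finetune(net: nn.Module, loader: DataLoader, epochs: int = 3) -> None:
    criterion = nn.CrossEntropyLoss()
    optimiser = torch.optim.Adam(net.parameters(), lr=1e-4)
    net.train()
    for _ in range(epochs):
        for images, labels in loader:        # labels: class indices 0..3
            optimiser.zero_grad()
            loss = criterion(net(images), labels)
            loss.backward()
            optimiser.step()
```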

What is Deep Learning?

Deep learning is based on neural networks, whose foundations have existed since the early 1940s. Growing computing power and simplified access to huge amounts of data (big data) have increased attention and enabled the use of neural networks on a completely new level. Today we use them to convert structured and unstructured information from images, text and other sources into numerical values and to interpret them.

How can Deep Learning be classified?

The digital data that accumulates today in everyday life and in companies can be divided into unstructured and structured data. While structured data has a normalised form and can be stored in row- and column-oriented databases, unstructured data has no readily identifiable structure and cannot easily be stored in conventional, SQL-based databases. Examples of unstructured data are text files, presentations, videos, images and other types of data. Big-data applications provide functions that enable the processing, storage and analysis of unstructured data.
And this is where the term deep learning comes in.

Deep learning is a subset of machine learning and artificial intelligence, but there are differences.
The main difference between deep learning and "classical" machine learning is the ability to process unstructured data with artificial neural networks (ANNs). Classical machine learning is primarily concerned with processing structured data and with methods that do not use artificial neural networks, in contrast to deep learning methods, where data processing is realised entirely by the ANN.

How does an ANN work?

So-called deep learning thus arises from the complexity of the ANN, and its function can be broken down as follows:
The nodes of the given network structure, the neurons, are initially assigned random weights, which are varied and adapted in the course of training. The input data are weighted by the neurons with their individual weights. The results of this calculation are passed on, layer by layer, to further neurons via the respective network connections and are reweighted there. At the output layer, the overall result is calculated and displayed.
At the beginning of the learning process, as with any machine learning method, this overall result will not always be correct and errors will occur. These errors are necessary for learning, because the error and each neuron's share in it can be calculated. The goal is to minimise the error piece by piece with each run. This is achieved by adjusting the weights and biases of each neuron.
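The following toy example, written for illustration only, walks through exactly these steps on a tiny problem: random initial weights, a layer-by-layer forward pass, calculation of the error and each neuron's share in it, and repeated weight adjustments that shrink the error.

```python
# Toy illustration of the process described above: random initial weights,
# layer-by-layer forward pass, error measurement, and weight updates that
# reduce the error step by step (here on a tiny XOR-like problem).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for a small 2-8-1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass: each layer weights its inputs and passes the result on.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error at the output, then each neuron's share of it (backpropagation).
    err = out - y                           # gradient of 0.5 * squared error
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust weights and biases to reduce the error on the next pass.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically approaches [[0], [1], [1], [0]]
```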

Continual Learning: Academics and practitioners alike believe that continual learning (CL) is a fundamental step towards artificial intelligence. Continual learning is the ability of a model to learn continually from a stream of data. In practice, this means supporting the ability of a model to autonomously learn and adapt in production as new data comes in. Some may know it as auto-adaptive learning or continual AutoML. The idea of CL is to mimic humans' ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. In machine learning, the goal is usually to deploy models to a production environment; with continual learning, we want to use the data that arrives in the production environment and retrain the model based on this new activity.
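One simple way to realise this retraining-on-new-activity idea is experience replay, sketched below: newly arriving, radiologist-verified cases are mixed with a sample of earlier cases so that updates on new data do not erase what the model has already learned. This is an illustration of the general principle, not the exact mechanism of our L-DNN.

```python
# Hedged sketch of a replay-based continual update: mix a buffer of earlier
# training cases with newly arriving, verified cases before each update.
import random
import torch
import torch.nn as nn

def continual_update(model: nn.Module,
                     new_batch: list,       # [(image_tensor, label), ...]
                     replay_buffer: list,   # cases kept from earlier training
                     replay_size: int = 32,
                     lr: float = 1e-5) -> None:
    criterion = nn.CrossEntropyLoss()
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)

    # Interleave new feedback with a sample of old cases (rehearsal).
    replay = random.sample(replay_buffer, min(replay_size, len(replay_buffer)))
    mixed = new_batch + replay
    random.shuffle(mixed)

    model.train()
    images = torch.stack([x for x, _ in mixed])   # assumes equally sized tensors
    labels = torch.tensor([y for _, y in mixed])
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()

    # Remember some of the new cases for future rehearsal.
    replay_buffer.extend(new_batch)
```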

In our approach, we use a convolutional neural network (CNN) as the task network to illustrate how reinforced continual learning (RCL) adaptively expands the network to prevent forgetting, although the method applies not only to convolutional networks but also to fully connected networks. The method searches for the best neural architecture for each incoming task by reinforcement learning, increasing the network's capacity when necessary and effectively preventing semantic drift. We aim to implement both fully connected and convolutional neural networks as our task networks and to validate them on different datasets. First, we will develop new strategies for RCL to facilitate backward transfer, i.e. improving performance on previous tasks by learning new tasks. Moreover, reducing the training time of RCL is particularly important for large networks with many layers.
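The sketch below illustrates the expansion idea in its simplest form: a shared backbone with one head per task, where earlier parameters are frozen before new capacity is added. In RCL the amount of expansion is chosen by a reinforcement-learning controller; here the expansion is fixed so the example stays short, and all names are illustrative.

```python
# Simplified illustration of "expand when a new task arrives": a shared
# backbone plus one classification head per task. Freezing old parameters
# before adding a head prevents forgetting of earlier tasks.
import torch
import torch.nn as nn
from torchvision import models

class ExpandableClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(pretrained=True)
        # Everything except the final fully connected layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.heads = nn.ModuleList()            # one head per task

    def add_task(self, n_classes: int) -> int:
        # Freeze previously learned parameters so old tasks are preserved,
        # then add fresh, trainable capacity for the new task.
        for p in self.parameters():
            p.requires_grad = False
        self.heads.append(nn.Linear(512, n_classes))   # 512 = ResNet-18 features
        return len(self.heads) - 1                     # task id

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.heads[task_id](z)

# Usage sketch: task 0 could be the four-way triage task, task 1 a later one.
# model = ExpandableClassifier(); triage_task = model.add_task(4)
```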

On our platform, only verified health specialists can help fine-tune the models by interacting with the software and giving feedback on the test results, so that the software continues to learn and optimises its predictions automatically. Our goal is to use collective human intelligence to build state-of-the-art artificial intelligence.

Contact Us

    Phone & Email

    +49 (0) 511 475395-0
    [email protected]

     

    Working hours

    Monday to Friday – 8am to 5pm

     

    OFFICE ADDRESS

    edicos consulting & software GmbH & Co.KG.

    Leisewitzstraße 4
    30175 Hannover
    Germany

     

    Careers

    Send your portfolio and CV.
    We will answer ASAP