Anaesthesia, nerve blocks and artificial intelligence

“Show me where the nerves are, and I’ll block them…” (Consultant anaesthetist colleague)

The practical aspect of ultrasound-guided regional anaesthesia (UGRA) comprises two key clinical skills [1]: 

  • image interpretation, i.e. knowing what you’re looking at on ultrasound. 
  • viewing of needle insertion and injection, i.e. keeping the needle and relevant structures in view and watching the safe spread of local anaesthetic.

Ultrasound image interpretation, as for regional anaesthesia in general, requires a sound knowledge of anatomy. However, anatomical knowledge alone does not address the challenge of acquiring and interpreting ultrasound images to perform a peripheral nerve block safely and effectively. While improvements in ultrasound technology provide greater image resolution, developments in artificial intelligence (AI) may be employed to support the application of this technology to identify the salient sono-anatomy [2].

AI is ‘the ability of a computer programme to perform processes associated with human intelligence’ [3]. A subfield of AI called ‘computer vision’ has been the focus of particular attention with respect to medical image analysis. This uses many of the techniques outlined in Table 1 to enable computers to interpret the visual world. Of these, deep learning is especially useful as it can drive learning from large datasets; large databases of medical images are often readily available.

We have contributed to the development of a deep learning-based system called ScanNav Anatomy Peripheral Nerve Block (ScanNav Anatomy PNB, formerly known as AnatomyGuide). This system uses deep learning to identify anatomical structures on B-mode ultrasound and apply a colour overlay to those structures in real time. This is achieved using convolutional neural networks (CNNs; ConvNets) based on the U-Net architecture [4] (Figure 1). Input data (greyscale ultrasound images subsampled to 160 x 160 pixels) pass through a series of computational (neural) layers, with each layer extracting specific feature information. In the initial ‘contracting’ path in Figure 1, each down-sampling layer on the left-hand side of the ‘U’ applies a series of convolutional filters to extract image features, then halves the resolution for the next layer. The top layer (left) of the CNN looks for information at the level of individual pixels in the input image and draws out obvious, more generalisable features such as edges and lines. Lower layers, working at lower resolution, then look for coarser features that span larger regions of the image. At the lowest level in the network, the entire image is represented by a 10 x 10 grid of features. Through down-sampling, the model gains a better understanding of what is present in the image, but it loses information about where those features are. In the subsequent ‘expanding’ path on the right-hand side, up-sampling layers apply further convolutional filters and successively double the resolution until the image is once again at the input resolution. This up-sampling helps the network understand where the features are in the image. ‘Skip connections’ (arrows from left to right) carry features across from the contracting path to the expanding path, bypassing the lower layers of the network. This allows the network to reuse information from higher layers, which would otherwise become too abstract to be used further, so that it can learn to generate fine-grained detail in the output segmentation (in this case, recognition of a given anatomical structure and application of a colour overlay).
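
To make this architecture more concrete, below is a toy U-Net-style network written in Python (PyTorch). The channel counts, the class count (background plus the five structures labelled in Figure 1) and the overall scale are illustrative assumptions only; this is a sketch of the contracting path, expanding path and skip connections described above, not the ScanNav Anatomy PNB implementation.

```python
# A minimal U-Net-style segmentation network, loosely following the description
# above (160 x 160 greyscale input, resolution halved four times to a 10 x 10
# bottleneck, then doubled back up, with skip connections). Channel counts and
# class count are assumptions for illustration, not the ScanNav implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net paper [4].
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=6):  # e.g. background + 5 structures (assumed)
        super().__init__()
        chs = [16, 32, 64, 128]                      # encoder channel counts (assumed)
        self.encoders = nn.ModuleList()
        in_ch = 1                                    # greyscale input
        for ch in chs:
            self.encoders.append(conv_block(in_ch, ch))
            in_ch = ch
        self.pool = nn.MaxPool2d(2)                  # halves resolution: 160->80->40->20->10
        self.bottleneck = conv_block(chs[-1], 256)   # the 10 x 10 grid of features
        self.upsamples = nn.ModuleList()
        self.decoders = nn.ModuleList()
        in_ch = 256
        for ch in reversed(chs):
            self.upsamples.append(nn.ConvTranspose2d(in_ch, ch, kernel_size=2, stride=2))
            self.decoders.append(conv_block(ch * 2, ch))  # *2 because of the skip connection
            in_ch = ch
        self.head = nn.Conv2d(chs[0], n_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        skips = []
        for enc in self.encoders:                    # contracting path
            x = enc(x)
            skips.append(x)                          # keep features for the skip connections
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upsamples, self.decoders, reversed(skips)):
            x = up(x)                                # expanding path: double the resolution
            x = dec(torch.cat([x, skip], dim=1))     # skip connection: reuse fine detail
        return self.head(x)                          # one score map per anatomical class

if __name__ == "__main__":
    net = TinyUNet()
    scan = torch.randn(1, 1, 160, 160)               # one subsampled greyscale frame
    overlay_logits = net(scan)
    print(overlay_logits.shape)                      # torch.Size([1, 6, 160, 160])
```

Running the script on a random 160 x 160 frame simply confirms that the output contains one score map per class at the input resolution, which is the form from which a colour overlay can be drawn.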

Each layer of the model helps to provide a specific feature map of the image. An analogy for this is: “Imagine flying at night in an aeroplane over Los Angeles and taking a photo; the lights seen in your photo form a rough map of the features of the lit-up city. (...) Now imagine that you had a very special camera that could produce separate photos for house lights, building lights and car lights. This is something like what the visual cortex does: each important visual feature has its own separate neural map. (...) And (very roughly) like the visual cortex, each layer in a ConvNet consists of several grids of these units, with each grid forming an activation map for a specific visual feature” [5].
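
The idea of separate activation maps can be shown with a small worked example. In the Python sketch below (the image and the two edge-detecting filters are invented purely for illustration), two different 3 x 3 filters slide over the same image and each produces its own feature map, much as the ‘special camera’ produces separate photos for different kinds of light.

```python
# Toy illustration of the "separate photos for separate features" analogy:
# two different 3x3 filters slide over the same image and each produces its
# own activation map. The image and filters are invented for illustration.
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (strictly cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A small synthetic "image": a bright vertical stripe on a dark background.
image = np.zeros((6, 6))
image[:, 2:4] = 1.0

vertical_edges = np.array([[1, 0, -1],
                           [1, 0, -1],
                           [1, 0, -1]])    # responds to vertical boundaries
horizontal_edges = vertical_edges.T        # responds to horizontal boundaries

# Each filter yields its own feature ("activation") map of the same scene.
print(convolve2d(image, vertical_edges))
print(convolve2d(image, horizontal_edges))
```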

Table 1. Commonly used terms in artificial intelligence

Machine learning: Enables computers to learn (improve performance with experience). This often involves training an algorithm (rule-based problem-solving instructions executed by the computer) by exposing it to ‘training data’.
Supervised machine learning: Training data is labelled, typically by human experts. The system learns to make associations between the label and the underlying data.
Unsupervised machine learning: Training data is not labelled. Unsupervised systems learn underlying patterns and relationships by clustering data into groups that contain similar features.
Semi-supervised machine learning: A small amount of training data is labelled and a much larger amount is unlabelled. The model learns from a combination of these data.
Deep learning: A subfield of machine learning that uses networks consisting of multiple ‘neural layers’. The layers are arranged hierarchically to extract progressively more complex characteristics from the input data.
Artificial neuron: A mathematical function used in the field of artificial intelligence (the idea of a single functioning element was based on the concept of a biological neuron).
Convolution: The mathematical function executed by the artificial neuron. The function is applied to data points within an array (a greyscale ultrasound image, or the subsequent output after processing in a down-/up-sampling layer of the convolutional neural network). The output is relayed to artificial neurons in the next layer.
Neural layer: Connected computer processing units (artificial neurons) which each perform a specific function.
Convolutional neural network: Multiple layers of artificial neurons. Each neuron receives one or more inputs and creates a single output which it relays to elements of the next layer. In a fully connected network, all neurons in one layer are connected to all neurons of the next layer.

Figure 1. A simplified overview of the convolutional neural network used in ScanNav Anatomy Peripheral Nerve Block. S - sartorius; AL - adductor longus; Fa - femoral artery; Sn - saphenous nerve; F - femur; conv - convolutional filter.

During development of ScanNav Anatomy PNB, a separate network was created for each anatomical region of interest (i.e. the area scanned for each block). Ultrasound videos for each area were allocated at random to training (90%) or testing (10%; internal validation). Training data for a region consisted of pairs of images: the first element of each pair was an unmodified still frame taken from ultrasound videos of the region of interest, and the second was a manually segmented (marked-up/labelled) colour overlay corresponding to that view. As these image pairs were presented, the network learned to associate the area of the colour overlay with the corresponding area on the underlying B-mode ultrasound image, and thus learned to recreate the desired colour overlay when presented with an unlabelled input ultrasound image. The 10% of data reserved for testing was used to evaluate the network’s performance after training. This is a supervised machine learning process, in that the learning is directed by human input at each stage. A typical training set consisted of 115,000 pairs of still frame images for each network; overall, more than 800,000 images were labelled and used. An example of the colour overlay produced can be seen at the end of the U-Net CNN graphic in Figure 1.
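
To illustrate this supervised process in code, the sketch below reuses the toy TinyUNet from the earlier example and trains it on synthetic image-overlay pairs using a random 90:10 split and a per-pixel classification loss. The dataset, the split (by frame here, rather than by video as in the actual development), and the hyperparameters are assumptions made for the sketch, not details of the ScanNav training pipeline.

```python
# Illustrative supervised training loop for a segmentation network.
# Data, split and hyperparameters are assumptions made for this sketch.
import torch
from torch.utils.data import DataLoader, Dataset, random_split

class FramePairDataset(Dataset):
    """Pairs of (greyscale frame, labelled colour overlay), as described above."""
    def __init__(self, frames, overlays):
        self.frames = frames        # tensor: N x 1 x 160 x 160, greyscale frames
        self.overlays = overlays    # tensor: N x 160 x 160, integer class labels

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        return self.frames[idx], self.overlays[idx]

# Synthetic stand-in data so the sketch runs end to end.
frames = torch.randn(200, 1, 160, 160)
overlays = torch.randint(0, 6, (200, 160, 160))
dataset = FramePairDataset(frames, overlays)

# Random 90:10 split into training and testing (internal validation) sets.
n_train = int(0.9 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

net = TinyUNet(n_classes=6)                       # the toy network defined earlier
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()             # per-pixel classification loss

for epoch in range(5):
    for frame, overlay in DataLoader(train_set, batch_size=8, shuffle=True):
        optimiser.zero_grad()
        loss = loss_fn(net(frame), overlay)       # compare predicted and labelled overlays
        loss.backward()
        optimiser.step()

# The held-out 10% is used only to evaluate the trained network.
with torch.no_grad():
    frame, overlay = next(iter(DataLoader(test_set, batch_size=8)))
    print("test loss:", loss_fn(net(frame), overlay).item())
```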

The authors have recently published the results of an initial system evaluation, in which three independent experts in UGRA assessed the colour overlays produced for the test data [6]. The experts judged the AI-driven colour highlighting to be helpful for identifying anatomical structures in 1330/1334 (99.7%) cases, and for confirming the correct ultrasound view in 273/275 (99.3%) scans. The device has been granted regulatory approval for clinical use in Europe and is currently under review by the regulatory body in the USA. Furthermore, an objective and quantitative assessment of the system is underway to establish its performance relative to humans, both experts and non-experts. This will help to identify its place in current practice, the potential for future development, and its role in supporting training. While such technology is not without limitations and inherent inaccuracies, automated systems already exist that approach, and even surpass, human performance in medical image interpretation [7, 8].

There is more to UGRA than anatomical knowledge and ultrasound image interpretation, but AI is perhaps on the verge of showing you the nerves…

Acknowledgements
We wish to acknowledge Dr Jeremy Mortimer and Dr Filip Zmuda for the illustrations. 

James Bowness
Consultant Anaesthetist, Aneurin Bevan University Health Board
DPhil Student, University of Oxford 

Alan JR Macfarlane
Consultant Anaesthetist, NHS Greater Glasgow & Clyde
Honorary Professor, University of Glasgow 

Alison Noble
Technikos Professor of Biomedical Engineering, University of Oxford 

Helen Higham
Consultant Anaesthetist, Oxford University Hospitals NHS Foundation Trust
Associate Professor, University of Oxford 

David Burkett-St Laurent
Consultant Anaesthetist
Royal Cornwall Hospitals NHS Trust, Truro

Twitter: @bowness_james; @ajrmacfarlane; @AlisonNoble_OU; @HelenEHigham

Conflict of Interests
JB and DBSL are Clinical Advisors and AN is Senior Scientific Advisor to Intelligent Ultrasound Limited. AJRM has acted as an independent reviewer of Intelligent Ultrasound data submitted for regulatory approval (FDA and CE). DBSL is the Lead Clinician for ScanNav Anatomy PNB.

References 

  1. Sites BD, Chan VW, Neal JM, et al. The American Society of Regional Anesthesia and Pain Medicine and the European Society of Regional Anaesthesia and Pain Therapy Joint Committee recommendations for education and training in ultrasound-guided regional anesthesia. Regional Anesthesia and Pain Medicine 2009; 34: 40-6. 
  2. Bowness J, El-Boghdadly K, Burkett-St Laurent D. Artificial intelligence for image interpretation in ultrasound-guided regional anaesthesia. Anaesthesia 2021; 76: 602-7. 
  3. Drukker L, Noble JA, Papageorghiou AT. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynaecology. Ultrasound in Obstetrics and Gynecology 2020; 56: 498-505. 
  4. Ronneberger O, Fischer P, Brox T. U-Net: convolutional neural networks for biomedical image segmentation. 2015; arXiv:1505.04597. 
  5. Mitchell M. Artificial intelligence: a guide for thinking humans. New York: Farrar, Straus and Giroux, 2019. ISBN-13: 978-0374257835. 
  6. Bowness J, Varsou O, Turbitt L, Burkett-St Laurent D. Identifying anatomical structures on ultrasound: assistive artificial intelligence in ultrasound-guided regional anesthesia. Clinical Anatomy 2021; https://doi.org/10.1002/ca.23742. 
  7. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 2018; 24: 1342-50. 
  8. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature 2020; 577: 89-94.
