A new study by scientists at the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital (MGH) suggests the day is approaching when advanced artificial intelligence systems could assist anesthesiologists in the operating room.
In a special edition of the journal Artificial Intelligence in Medicine, a team of neuroscientists, engineers, and physicians demonstrated a machine learning algorithm for continuously automating the dosing of the anesthetic propofol. Using an application of deep reinforcement learning, in which the software's neural networks simultaneously learned how its dosing choices maintain unconsciousness and how to critique the efficacy of its own actions, the algorithm outperformed more traditional software in sophisticated, physiology-based simulations of patients. It also closely matched the performance of real anesthesiologists when showing what it would do to maintain unconsciousness given recorded data from nine real surgeries.
The algorithm's advances increase the feasibility of computers maintaining patient unconsciousness with no more drug than is needed, thereby freeing anesthesiologists for the many other responsibilities they have in the operating room, including making sure patients remain immobile, feel no pain, stay physiologically stable, and receive enough oxygen, say co-authors Gabe Schamberg and Marcus Badgeley.
“One can think of our goal as being analogous to an airplane’s autopilot, where the captain is always in the cockpit paying attention,” says Schamberg, a former MIT postdoc who is also the study’s corresponding author. “Anesthesiologists have to simultaneously monitor numerous aspects of a patient’s physiological state, and so it makes sense to automate those aspects of patient care that we understand well.”
Senior author Emery N. Brown, a neuroscientist at The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH, says the algorithm’s potential to help optimize drug dosing could improve patient care.
“Algorithms such as this one allow anesthesiologists to maintain more careful, near-continuous vigilance over the patient during general anesthesia,” said Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT.
Both actor and critic
The research team designed a machine learning approach that would learn not only how to dose propofol to maintain patient unconsciousness, but also how to do so in a way that optimizes the amount of drug administered. They accomplished this by endowing the software with two related neural networks: an “actor” with the responsibility to decide how much drug to dose at every given moment, and a “critic” whose job was to help the actor behave in a way that maximizes “rewards” specified by the programmers. For instance, the researchers experimented with training the algorithm using three different rewards: one that penalized only overdosing, one that questioned providing any dose, and one that imposed no penalties.
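The three reward schemes described above can be sketched as simple functions of a tracking error (how far the measured depth of unconsciousness is from the target) and the administered dose. The function names, weights, and formulas below are illustrative assumptions, not the paper's actual reward definitions:

```python
def reward_no_penalty(tracking_error, dose):
    # Reward only tracking accuracy; dosing itself is never penalized.
    return -abs(tracking_error)

def reward_overdose_only(tracking_error, dose, max_dose=1.0):
    # Penalize tracking error plus any dose above a safe ceiling.
    overdose = max(0.0, dose - max_dose)
    return -abs(tracking_error) - overdose

def reward_dose_penalty(tracking_error, dose, weight=0.1):
    # "Dose penalty": every unit of drug administered costs reward,
    # pushing the actor toward the minimum effective dose.
    return -abs(tracking_error) - weight * dose
```

Under the third scheme, the critic's value estimates discourage any dosing that does not pay for itself in better maintenance of unconsciousness.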
In each case, they trained the algorithm with simulated patients using advanced models of both pharmacokinetics, or how quickly propofol doses reach the relevant regions of the brain after administration, and pharmacodynamics, or how the drug actually alters consciousness once it reaches its destination. Patients’ levels of unconsciousness, meanwhile, were reflected in measures of brain waves, just as they can be in real operating rooms. By running hundreds of rounds of simulation with a range of values for these conditions, both the actor and the critic could learn how to perform their roles for a variety of patient types.
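The pharmacokinetic/pharmacodynamic pairing can be illustrated with a deliberately simplified model: one compartment for drug concentration in the plasma, a delayed "effect site" concentration in the brain, and a Hill-equation curve mapping concentration to drug effect. The rate constants and parameters here are placeholders for illustration, not the far more detailed models the study used:

```python
def simulate_effect_site(doses, dt=5.0, ke0=0.0076, kel=0.002):
    """Toy pharmacokinetics: a single plasma compartment feeding a
    delayed effect site. `doses` is a sequence of infusion rates;
    returns the effect-site concentration after each step."""
    cp = 0.0   # plasma concentration
    ce = 0.0   # effect-site concentration (lags behind plasma)
    history = []
    for u in doses:
        cp += dt * (u - kel * cp)    # drug infused and eliminated
        ce += dt * ke0 * (cp - ce)   # effect site equilibrates slowly
        history.append(ce)
    return history

def hill_effect(ce, ce50=2.0, gamma=2.0):
    # Toy pharmacodynamics: sigmoid (Hill) relation between
    # effect-site concentration and drug effect, in [0, 1).
    return ce**gamma / (ce50**gamma + ce**gamma)
```

Varying parameters such as `ke0` and `ce50` across simulation rounds is one simple way to expose a learner to different patient types.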
The reward framework that worked best was the “dose penalty” one, in which the critic questioned every dose the actor gave, continually chiding the actor to keep dosing to the minimum needed to maintain unconsciousness. Without any dosing penalty the system sometimes gave too much, and with only an overdose penalty it sometimes gave too little. The “dose penalty” model learned more quickly and produced less error than the other reward models and than traditional standard software, a “proportional-integral-derivative” (PID) controller.
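The PID baseline mentioned above is a classic feedback controller that computes a dose from the current error, its accumulated history, and its rate of change. A minimal sketch, with placeholder gains rather than clinically tuned ones:

```python
class PID:
    """Minimal proportional-integral-derivative controller of the kind
    the learned policy was compared against; gains are illustrative."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=5.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        dose = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, dose)  # an infusion rate cannot be negative
```

Unlike a PID controller, which reacts only to the current error signal, the trained actor can in principle exploit what it has learned about how doses propagate through a patient's body over time.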
After training and testing the algorithm with simulations, Schamberg and Badgeley put the “dose penalty” version to a more real-world test, feeding it patient consciousness data recorded from real cases in the operating room. The tests demonstrated both the strengths and the limitations of the algorithm.
In most of the tests, the algorithm’s dosing choices closely matched those of the attending anesthesiologists after unconsciousness had been induced and before it was no longer needed. The algorithm, however, adjusted dosing as often as every five seconds, while the anesthesiologists (who all had many other duties) typically did so only every 20-30 minutes, Badgeley notes.
As the tests showed, the algorithm is not optimized for inducing unconsciousness in the first place, the researchers acknowledge. The software also doesn’t know on its own when surgery is over, they add, but it’s a simple matter for an anesthesiologist to manage that process.
According to Schamberg, one of the most important challenges any AI system is likely to continue facing is whether the data it is fed about patient unconsciousness is perfectly accurate. Another active area of research in Brown’s lab at MIT and MGH is improving the interpretation of data sources, such as brain wave signals, to enhance the quality of patient monitoring data under anesthesia.
In addition to Schamberg, Badgeley, and Brown, the paper’s other authors are Benyamin Meschede-Krasa and Ohyoon Kwon.
The study was funded by the JPB Foundation and the National Institutes of Health.