Abstract
Due to recent advances in computer technology, almost all medical images are now handled by computers, and the performance of the diagnostic and therapeutic devices that use them has come to depend on the software that controls them. In this context, a new revolution is taking place in the technologies that present the results of computer calculations to human vision. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are representative of these technologies, and their spread has been remarkable. The author has applied them to spine surgery, developing a simulator for pedicle screw insertion and an intraoperative image-assistance system that allows the surgeon to confirm the individual patient's anatomy during surgery. The simulator reproduces the operating table and the three-dimensional anatomy in a virtual space; the user can move freely within that space and insert a screw while observing the relevant anatomy in any cross section. The intraoperative image-assistance system allows the surgeon to plan the screw trajectory before surgery and project that trajectory onto the surgical field. These techniques not only improved the surgeon's three-dimensional understanding of the anatomy and spatial awareness, but also proved useful for medical education on surgical technique. In addition, online conferencing in a metaverse built with XR made it possible to share implant and bone data in a virtual space, using AR markers to align avatar positions, so that surgery could be performed while conferencing with remote sites.