Internet2 Applications



The Internet2 project has made exciting strides in a wide array of fields, including the arts, sciences, and engineering, but the area with the most activity has been the Health Sciences Initiative. Universities and other organizations across the country have taken full advantage of new networking technologies and are currently collaborating on projects in medical education, virtual reality, and telepathology, all of which require advanced networking capabilities. What follows is a sampling of academically oriented applications currently in development.

Health-Oriented Applications:

Virtual Reality:
  • Virtual Pelvic Floor (University of Illinois, Chicago)
    An example of networked computing in action, this project enables users to interact with a 3D pelvic model, converse with and see collaborators in remote locations, and point in three dimensions, all in real time.


  • Virtual Aneurysm (UCLA)
    This project has improved the accuracy of blood-flow simulations. The fast network allows researchers around the country to access the network-intensive 3D simulation stored on the server at UCLA in real time.


Visualization:
  • Human Embryo Development (George Mason University, Oregon Health Sciences University, National Library of Medicine)
    The purpose of this project is to better understand and communicate information on human embryo development in 3D visual form, using a network of workstations equipped with advanced technology. High-resolution images of the human embryo can be annotated and used for collaborative research, diagnosis, clinical case management, and medical education.


Interactive/Simulation-Based Learning Environments:
  • Renal Physiology Modules (Stanford University)
    The modules provide a knowledge base, a lab and quiz section on real-life applications, a glossary, and online office-hours chat rooms, all accessible to students in real time over an advanced, high-speed network.


  • Anatomy & Surgery Workbenches, Local NGI Testbed Network (Stanford University)
    In this project, the network is used to test the workbenches' effectiveness as teaching tools. It links laboratories, classrooms, clinical departments, and medical libraries. Through this testbed network, users have access to 3D workstations, haptic devices, stereoscopic displays, rich media databases, and application program servers, all geared towards the teaching of anatomy and surgical skills.


Telemedicine:
  • Surveyor (University of Wyoming, Wyoming Department of Health)
    Surveyor is a centralized source of health science information available in rural areas for practitioners, students, educators, administrators, and researchers. It acts as a reliable source of critical, geographically-based data.


Medical Consultation/Distance Learning:
  • NLM Testbed for Collaborative Videoconferencing (National Library of Medicine)
    This Internet2 testbed offers users a wide array of capabilities, including MPEG-2 videoconferencing, NTSC-quality video feeds, telemedicine, consulting, video microscopes, and digital stethoscopes.


Bioscience-Oriented Applications:

Remote Instrumentation:
  • Telescience Alpha Project (NPACI, NCMIR)
    This project gives scientists real-time, networked access to instruments and resources for examining biological specimens; it is also working toward an end-to-end method for electron tomography.


Distributed Computing:
  • Biomedical Informatics Research Network (UCSD)
    The network built in this project links Caltech, UCLA, UCSD, the San Diego Supercomputer Center, Harvard, and Duke to share digital MRIs, 3D microscope images, and other data relevant to understanding brain-related diseases.


Interactive Collaboration:
  • Scientific Collaboration using the Access Grid (Johnson & Johnson)
    This project aims to aid drug discovery by integrating laser capture microscopes (and other instruments) into an interactive environment by using the Access Grid.


Stanford's Involvement

Though Internet2's success in the medical field relies on the collaboration of many universities across the country, Stanford University's own labs have made large contributions to the project, most notably the Anatomy and Surgery Workbenches developed by SUMMIT (Stanford University Medical Media and Information Technologies). SUMMIT developed these workbenches primarily as teaching tools for medical school students. Instead of restricting students to classroom work or passive interaction with subjects, instructors now have the option of letting students touch, feel, and cut digital subjects without the risk of damaging a living one. Whereas mistakes in a real dissection or surgery cannot be undone, mistakes made in the simulator are fixed with a simple restart.

The research for these projects began in 1998 with the development of simulations (3D visualizations with haptic feedback) for knee arthroscopy, laparoscopy, endoscopy, epidural needle insertion, and sinus endoscopy, intended to aid in surgical planning. Because of their high computational demands, these simulations were unsuitable for use over the ordinary Internet. With the development of new, high-speed Internet2 technologies, Stanford began building two learning environments, not only with the intent of making them accessible to others over a network, but also with the goal of collaborative manipulation: multiple users manipulating a model in real time from separate locations on the network. These environments were for teaching anatomy (initially just a hand) and for practicing surgical techniques (a pelvis to probe, cut, and suture).

The Workbenches:
  • Anatomy Workbench - The "composite virtual cadaver" provides its users with rotated views of the hand at different stages of dissection. Photographs are taken at five-degree intervals, an angular offset that approximates the separation between the human eyes. To view the virtual hand, a student wears special glasses that flicker, covering one eye at a time to simulate true three-dimensional viewing. In addition to viewing the hand at different levels of dissection, a user can interact with it, manipulating the joints and watching the movement of the tendons and muscles. A rough sketch of how such stereo pairs might be selected follows this list.


  • Surgery Workbench - This seeks to make the simulated anatomy of the female pelvis palpable through haptic interaction with representations of surgical tools. Each component has specific visual and haptic properties that must be modeled to a high degree of accuracy to retain realism, and the technology must be scalable so that expansions are possible, such as extending the simulation to other parts of the body. If the server is too slow, the user can download portions of the simulation to his or her own machine in order to keep the simulation speed as close to real time as possible. The virtual tools provide five-axis force feedback and five degrees of freedom, enough to represent most of the tools needed for surgery.
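
As a rough illustration of the Anatomy Workbench's stereo viewing described above, the short Python sketch below pairs neighboring photographs from the ring of five-degree views, one for each eye; the indexing scheme and function name are our own illustration, not SUMMIT's implementation.

    # Illustrative only: stereo-pair selection from photographs taken every 5 degrees.
    # The left eye is shown the view at the current angle and the right eye the
    # neighboring view, so the 5-degree offset plays the role of eye separation.
    STEP_DEG = 5
    VIEWS_PER_REVOLUTION = 360 // STEP_DEG   # 72 photographs per dissection layer

    def stereo_pair(view_index):
        """Return (left, right) photograph indices for shutter-glasses display."""
        left = view_index % VIEWS_PER_REVOLUTION
        right = (view_index + 1) % VIEWS_PER_REVOLUTION
        return left, right

    print(VIEWS_PER_REVOLUTION, "views per layer; pair at index 71:", stereo_pair(71))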


Challenges:

In the simulation server, great emphasis is placed on real-time interaction. When a student presses on a tissue with a tool, the feedback should be immediate, just as in real life, and the image display should update instantly upon manipulation or the simulation will seem unrealistic. A number of factors make this challenging, including image quality, latency, and force-vector computation. Every component (muscle, bone, tendon, etc.) of a three-dimensional image is represented by a wire mesh of polygons; the more polygons that make up a component, the more true to life it is. Some components of the hand generated for the Anatomy Workbench, for example, contain over one million polygons. Reducing the polygon count increases the speed with which the image can be processed but also reduces the image's realism. Additionally, each polygon on the mesh has an associated force vector, and these force vectors must be calculated at least one thousand times per second for the haptic feedback a user feels from the virtual tissue to seem realistic. The more calculations the simulator must make per second, the more the simulation slows down, so compromises must be made in both image quality and haptic fidelity. In both cases the limitations arise from the limits of current technology, not from the software.
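
As a rough illustration of the timing constraint described above, the Python sketch below models a single haptic update out of the roughly one thousand required each second, using a deliberately simplified spring model in place of a real polygon mesh; the stiffness value and function names are illustrative assumptions, not SUMMIT's code or data.

    import time

    # Hypothetical, simplified haptic servo step: a real simulator would test the
    # tool tip against a polygon mesh; here the "tissue" is a flat surface at y = 0
    # with a single stiffness constant (N/m).
    TISSUE_STIFFNESS = 800.0   # illustrative value, not measured haptic data
    UPDATE_RATE_HZ = 1000      # force vectors must be recomputed ~1000 times/second
    TIME_BUDGET_S = 1.0 / UPDATE_RATE_HZ

    def reaction_force(tool_y_m):
        """Spring-model force pushing the tool back out of the tissue."""
        penetration = max(0.0, -tool_y_m)      # how far the tool is below the surface
        return TISSUE_STIFFNESS * penetration  # Hooke's law: F = k * x

    def servo_step(tool_y_m):
        start = time.perf_counter()
        force = reaction_force(tool_y_m)
        elapsed = time.perf_counter() - start
        # If one update takes longer than 1 ms, the 1 kHz loop falls behind and
        # the haptic feedback starts to feel soft, laggy, or unstable.
        return force, elapsed <= TIME_BUDGET_S

    if __name__ == "__main__":
        force, on_time = servo_step(tool_y_m=-0.002)   # tool pressed 2 mm into tissue
        print(f"force = {force:.1f} N, within 1 ms budget: {on_time}")

A full model multiplies this work by the number of polygons near the tool, which is exactly the trade-off between realism and speed described above.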

SUMMIT deals with more limitations than just the inability to handle extremely computationally intensive operations such as those described above. In moving from a single machine to a network, problems of bandwidth, latency, and jitter emerge. Latency is a huge obstacle to real-time interaction with a simulator over a network, and the root of the problem is that it is inherent and unavoidable: data can travel no faster than the speed of light, and light does not travel instantaneously. Even ignoring routers and other factors that slow data transfer, data will never cover a distance without some delay. The focus therefore becomes reducing latency as much as physically possible. True "real time" is impossible over a network, but if latency is reduced enough, the slightly delayed simulation may become indistinguishable from a real-time one. The calculations the server must perform are not instantaneous either, which introduces another form of delay: with each manipulation, the server must update tool positions, the display, and force vectors, and then transmit them back to the user, which, depending on the computational complexity, can take long enough for the user to perceive a delay that destroys the realism of the simulation. Though the user may be able to download a small engine that performs some of the calculations on the local machine, or that adjusts latency so that it remains constant, the problem remains unsolved.
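
To make the speed-of-light floor concrete, the short calculation below estimates the best-case round-trip delay over a fiber path before any router, queueing, or computation time is added; the distance and the two-thirds-of-c fiber speed are generic textbook assumptions, not measurements of SUMMIT's network.

    # Light in optical fiber travels at roughly two-thirds of c, so even a
    # perfect network has a hard lower bound on latency.
    SPEED_OF_LIGHT_KM_S = 299_792                     # c in vacuum
    FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # common approximation for fiber

    def min_round_trip_ms(distance_km):
        """Best-case round-trip time over a straight fiber path of this length."""
        one_way_s = distance_km / FIBER_SPEED_KM_S
        return 2 * one_way_s * 1000

    # Illustrative coast-to-coast figure (~4000 km); real paths are longer.
    print(f"coast-to-coast floor: {min_round_trip_ms(4000):.0f} ms round trip")  # ~40 ms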

Direct from the Source:

We had the pleasure of talking with Dr. Kenneth Waldron and Dr. LeRoy Heinrichs about their work with Internet2 technology in SUMMIT for first-hand accounts of the work being done at Stanford.

Kenneth Waldron, PhD
Project Manager for the NGI Team; Research Professor in the Department of Mechanical Engineering at Stanford University

Dr. Waldron explained some of the limitations of the simulators developed by SUMMIT. He called the soft tissue models "not so good" because of the complex nature of the surfaces. Additionally, he pointed out that it is "quite a chore" to find haptic data for each individual tissue. He described digging through medical literature to find enough data to create a realistic model. However, he said, it is not necessary to achieve "super high fidelity" with haptic data. High fidelity models cannot run in real time given today's technology because of their computational complexity.

In addition to the network-induced problems, the tools themselves may also be problematic. In order to transmit haptic feedback to the user, a force must be transmitted to the biomechanical device the user is holding; if the system becomes unstable, the device shakes violently and can injure the user. Dr. Waldron described walking into a lab and seeing a doctor using a simulator with a large brace on one of his arms: apparently, he had had an accident with the simulator a few days before Dr. Waldron's visit. Dr. Waldron himself has been mildly injured by a haptic feedback device, so there are certainly bugs that still need to be worked out.

When asked about future applications of the workbenches developed by SUMMIT, Dr. Waldron told us that he was "skeptical" the technology would ever be used to perform remote surgeries. It will, he said, be used for remote diagnosis, consultation, and, in the future, teledermatology: a project currently in the works is developing a method for dermatologists to feel skin through haptic devices with enough realism that specific problems can be diagnosed. Very high fidelity will be necessary, he said.

Beyond these applications, Dr. Waldron thinks that this technology will play an enormous role in teaching students anatomy and surgical skills. He said that "students who use simulators do better." It is quite a jump from a textbook to an operating room, and the simulators help to bridge that gap, though they are not a replacement for real experience.

LeRoy Heinrichs, MD, PhD
Co-Principal Investigator and Research Affiliate for the NGI Team; Professor of Gynecology and Obstetrics at Stanford Medical School

Dr. Heinrichs explained that the technology developed in SUMMIT is purely for educational purposes; medically, he said, it is more of a "gee whiz" idea. He happily reported that, according to NGI, Stanford University has made the best use of the new technology and of the funding.

He explained to us in great detail the process of creating a 3D model of a human. For the Visible Human Project, 1/3 mm slices were scanned individually, producing 14-17 GB of data used to build a high-quality model; he had heard of slices as small as 300 microns being used for individual parts of the body. The advantage of smaller slices is that larger ones can miss very small features: any structure less than 2 mm long, for instance, will likely be glossed over in a scan with 1 mm slices. Once the visual model is created, he said, haptic feedback must be added, which means assigning force vectors to every spot on the model. Dr. Heinrichs described most of this as a "trial and error" or "this feels about right" process, and said that a certain degree of realism is lost in this step simply because tissues differ from person to person, vary within an individual tissue, and change with disease stage. It is impossible to model these variations with perfect accuracy, and the question he poses is, "How much realism is necessary?"
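
The "this feels about right" tuning Dr. Heinrichs describes can be pictured as iteratively adjusting a small table of per-tissue haptic parameters. The sketch below is purely illustrative: the tissue names, stiffness and damping numbers, and the ten-percent adjustment step are our assumptions, not values from the SUMMIT models.

    # Hypothetical per-tissue haptic properties (stiffness in N/m, damping in N*s/m).
    # In practice these would start from whatever data the literature provides and
    # then be nudged by hand until the tissue "feels about right".
    tissue_params = {
        "skin":   {"stiffness": 300.0, "damping": 2.0},
        "muscle": {"stiffness": 500.0, "damping": 3.0},
        "bone":   {"stiffness": 5000.0, "damping": 1.0},
    }

    def adjust_stiffness(tissue, feels_too_soft, step=0.10):
        """Trial-and-error tuning: raise or lower stiffness by 10% per trial."""
        factor = 1 + step if feels_too_soft else 1 - step
        tissue_params[tissue]["stiffness"] *= factor
        return tissue_params[tissue]["stiffness"]

    # Example trial: an evaluator reports that the muscle model feels too soft.
    print(f"new muscle stiffness: {adjust_stiffness('muscle', feels_too_soft=True):.0f} N/m")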

Up Close & Personal: Our experience in the SUMMIT lab

In addition to providing us with valuable verbal information about SUMMIT, Dr. Heinrichs invited us into the lab to experiment with the haptic devices and 3D models. The room with all the computer tools is rather small but packs a big punch. We first experienced a surgical simulation, in which a small character in the bottom corner of the screen talked us through a simple step one might encounter in surgery: grasping an object and putting it back in a specific location. The tools felt incredibly real, and the visuals were very convincing, even though it was just a practice simulation.

Next, we used the iFeelIt training tool, software designed to be used over the Internet so that students can get accustomed to the haptic devices used for surgical simulations. 3D objects appear on the screen, and the haptic device is used to feel the texture and shape of each object. Once the user is accustomed to that, the objects become invisible, and one must make an educated guess as to what each object is based on its feel. It was far more difficult than one might imagine, even though we were given a finite set of objects from which to guess.

Lastly, we demoed the polarized glasses and were allowed to explore the Bassett Collection, a series of 3D digitized images of a dissected human body, as well as the rotating model of the hand used in the Anatomy Workbench. The resolution of each image was incredibly fine, and even though we had never seen an actual human body in the process of being dissected, the images looked remarkably close to reality.

Photo Gallery

  • A simulation engine for teaching needle insertion
  • The surgical simulator from the Surgery Workbench
  • An example of tools used in surgical simulations
  • Dr. Heinrichs explains the basics of the simulator to Nathan
  • Nathan uses the tools to grasp objects in the simulator
  • Carly has the simulator explained to her
  • Concentrating intently on the task at hand
  • The iFeelIt haptic training tool
  • Carly tests out the iFeelIt software
  • Carly, Shaowei, and Nathan sporting the specialized glasses necessary for 3D viewing
  • Outside of the SUMMIT lab

Technical Requirements of Applications

The establishment of Internet2 has allowed many new applications to become possible. What features of Internet2 permit such advances, and what unique demands of new applications does Internet2 satisfy? To discuss the technical performance of applications over a network such as Internet2, it is useful to speak of four key performance metrics: bandwidth, latency, packet loss, and jitter.

  • Bandwidth is the maximum amount of data that can be transferred in a given amount of time. It is closely related to throughput, the actual data transfer rate achieved under practical conditions. Throughput is usually less than bandwidth, to a degree dependent on the protocols used and network overhead involved.


  • Latency is a measure of delay, or how long it takes information to travel across a network from source to destination. The effects of latency are often apparent when browsing the web; after clicking a link there may be a brief delay, after which the resulting page loads rapidly. Small delays occur as information passes through each device on its way to its destination and encounters congested areas of the network.


  • Jitter is closely related to latency and is a measure of its variation. Because network conditions constantly change and data does not always take the same path from source to destination, latency can vary. Jitter poses the biggest problems for streaming applications, where data could potentially arrive out of order. A common solution to this problem, as well as to packet loss, is buffering, in which the display of streamed data is slightly delayed to ensure that all information has arrived; a small sketch of this idea follows this list. However, this technique does not work well for applications that require truly instant streaming.


  • Packet loss describes the random loss of pieces of information that never reach their destination due to congestion or poor connections. While some protocols detect when data is missing and request that it be re-sent, this added overhead can decrease the performance of some applications.
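
To illustrate the buffering idea mentioned under jitter above, the sketch below delays playback by a fixed amount so that packets arriving with variable latency can still be played out at a steady rate; the packet interval, buffer depth, and arrival delays are invented for the example.

    # Jitter-buffer sketch: packet i is sent at i * 20 ms and should be played at
    # send_time + playout_delay. Arrival times vary (jitter); a packet that shows
    # up after its playout time is effectively lost to the viewer.
    PACKET_INTERVAL_MS = 20
    PLAYOUT_DELAY_MS = 60     # assumed buffer depth; bigger buffers hide more jitter

    # Invented arrival delays (ms) for ten packets, including one late outlier.
    network_delays = [30, 35, 28, 90, 33, 31, 40, 29, 36, 32]

    on_time = 0
    for i, delay in enumerate(network_delays):
        send_ms = i * PACKET_INTERVAL_MS
        arrival_ms = send_ms + delay
        playout_ms = send_ms + PLAYOUT_DELAY_MS
        if arrival_ms <= playout_ms:
            on_time += 1

    print(f"{on_time}/{len(network_delays)} packets arrived in time to be played")

The trade-off is visible in the constant: a deeper buffer hides more jitter but adds exactly that much delay, which is why buffering fails for applications that need truly instant streaming.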


Video Applications

One common use of Internet2 across many applications is videoconferencing. In the medical field, videoconferencing has the potential to enable remote collaboration between researchers and students, as well as remote contact with a surgeon during a procedure. Videoconferencing is available and used on the standard Internet; however, the higher bandwidth available on Internet2 allows higher-quality video and audio and more consistent connections.

One application that has been tested over both the standard Internet and Internet2 is the live broadcast of laparoscopic surgery. The broadcast was carried out over the standard Internet in 1997, on a connection between an ISDN line and a dial-up modem. Overall quality was poor: the video appeared in a small 320x240-pixel window at just 1-2 frames per second, an average of 17% of the audio packets were lost, and delays ranged from 0.5 to 2 seconds.

A similar experiment carried out in 1998 over an Internet2 connection shows the clear benefits of a next-generation connection. The additional bandwidth available at both ends allowed transmission of full-screen, TV-quality video at a rate close to 2 Mbps. Viewers reported excellent quality, delays were under 1 second, and packet loss was just 0.1%. Although the standard Internet may allow 2 Mbps transmissions if the endpoints have fast enough (multiple-T1) connections, it is unlikely to deliver the additional reliability and lower delays that Internet2 displayed in this test.

In video transmission, bandwidth is the most important factor, since high-quality video requires that a great deal of data be transferred every second; when enough bandwidth is not available, the only solution is to reduce the quality of the video and audio. Latency is typically not great enough to cause significant problems, and buffering can solve problems of jitter and packet loss as described above.
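
The arithmetic below shows why bandwidth dominates: uncompressed video is enormous, so a stream like the roughly 2 Mbps Internet2 broadcast depends on heavy compression and still needs far more sustained capacity than a dial-up or single ISDN line provides. The frame size, color depth, and frame rate are generic TV-like assumptions, not figures reported from the experiments.

    # Rough uncompressed bit rate for a TV-like picture.
    WIDTH, HEIGHT = 640, 480      # assumed full-screen resolution
    BITS_PER_PIXEL = 24           # 8 bits per color channel
    FRAMES_PER_SECOND = 30

    raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAMES_PER_SECOND
    print(f"uncompressed: about {raw_bps / 1e6:.0f} Mbps")        # roughly 221 Mbps

    # Compression (e.g. MPEG-2) must squeeze this into the available channel.
    CHANNEL_MBPS = 2.0            # roughly the rate of the Internet2 broadcast
    print(f"required compression ratio: about {raw_bps / (CHANNEL_MBPS * 1e6):.0f}:1")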

Haptics

Many exciting potential uses of Internet2 connections involve haptic devices. For example, people might use a local haptic device to interact with a model stored on a remote machine, or students might feel pre-programmed motions streamed from a server or an instructor at a remote location. Networked haptics creates the challenge of matching the motion of a haptic device as closely as possible with the image displayed: when a user touches a surface on a model, he or she should feel the surface and see the device's position update on the screen without perceptible delay.

In order to detect movement and respond with appropriate forces, haptic devices need to update 1000 times per second. One of the biggest challenges remote haptics faces is latency. Any significant delay between the user moving an input device or seeing a change on his or her monitor and feeling the appropriate feedback decreases the quality of simulation.

A study at Stanford University's SUMMIT lab used software to simulate various network conditions in order to test the impact of packet loss, delay, and jitter on an application in which pre-recorded motions were played back across a network to a user, who felt the motions through a haptic device held in his or her hand. The tests revealed that a latency below 20 ms is necessary for abrupt movements, while latencies up to 80 ms are acceptable for gentle movements; delays beyond these boundaries result in a sharp degradation in the quality of the experience. Jitter beyond 1 ms also had a strong negative effect on user experience. Packet loss and bandwidth were not limiting factors for this application: it tolerated up to 10 percent packet loss and required only about 128 Kbps of bandwidth.

In this experiment, researchers concluded that the current version of Internet2 cannot yet support this application. Delays were simply too high for smooth haptic feedback: connections on Internet2 between universities in California and Wisconsin experienced 30 ms of latency, while transmissions across the Pacific had delays as high as 85 ms. While Internet2 has certainly improved latency, it is worth noting that some applications have requirements that even the most state-of-the-art networks cannot yet support.
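
Putting the study's thresholds next to the measured delays makes that conclusion easy to check. The small script below uses only the numbers reported above; the function name and classification structure are our own.

    # Latency thresholds from the SUMMIT playback study (milliseconds).
    ABRUPT_MOTION_MAX_MS = 20
    GENTLE_MOTION_MAX_MS = 80

    def acceptable_for(latency_ms):
        """Which kinds of pre-recorded motion this latency can support, per the study."""
        if latency_ms <= ABRUPT_MOTION_MAX_MS:
            return "abrupt and gentle motions"
        if latency_ms <= GENTLE_MOTION_MAX_MS:
            return "gentle motions only"
        return "neither"

    # Measured Internet2 delays reported in the text.
    for name, latency_ms in [("California-Wisconsin", 30), ("trans-Pacific", 85)]:
        print(f"{name}: {latency_ms} ms -> {acceptable_for(latency_ms)}")
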
Conclusions

New applications exist that push the performance limits of the current Internet. While Internet2's biggest strength is the large amount of bandwidth it offers, it can also improve the performance of applications by offering lower latency and packet loss. In addition, the creation of Internet2 as a separate network rather than an addition to the existing Internet is allowing the testing and implementation of new protocols that increase the quality of connections and can potentially guarantee levels of service. Internet2 has made many improvements over the standard Internet and has made new applications possible. At the same time, these new applications have begun to push the limits of even Internet2 and are encouraging network technologies to continually improve.