1. A server can usually load image data from the archive or cache much faster than the client can. Servers sit in the data center, where links between machines are often 1 Gb/s or faster. A client's bandwidth is usually much lower, especially if the client is not in the same physical building. In many cases a client has around 15 Mb/s, which is still about 66x slower than the data center. To put this into perspective, a 1000 slice CT volume would load in roughly 4 seconds over a 1 Gb/s connection but would take over 4 minutes over 15 Mb/s (see the sketch after this list). The faster load time is perhaps the best reason to do server side rendering for volumetric data sets.
2. The resulting image is typically much smaller than the source data. For example, a 1000 slice CT volume is about 500 MB of data, yet an MPR or volume rendered view of that data encoded as JPEG may only be 5 kB in size. Similarly, a 2D projection radiograph is often 30 MB in size, yet a rendered JPEG of that image may only be 30 kB.
3. Minimal client side hardware dependencies. Volume rendering and MPR often require higher end hardware than is typically found on client machines. For example, a GPU based rendering engine often requires specific GPU cards and drivers that are not usually found on enterprise desktop PCs.
4. Minimal client side software installation. Medical imaging software can be difficult to deploy across the enterprise due to the variety of operating systems and hardware found on enterprise PCs. One strategy to deal with this is to install the software on a machine in the datacenter and allow a client to access it via a remote desktop protocol such as VNC or RDP.
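To make the bandwidth comparison in point 1 concrete, here is a minimal sketch of the transfer time math (the function name and numbers are illustrative, assuming the 500 MB volume described above):

```typescript
// Rough transfer time calculation for point 1 above.
// sizeMB is in megabytes, bandwidthMbps is in megabits per second.
function transferTimeSeconds(sizeMB: number, bandwidthMbps: number): number {
  const sizeMegabits = sizeMB * 8;      // megabytes -> megabits
  return sizeMegabits / bandwidthMbps;  // seconds to move the data
}

const ctVolumeMB = 500; // ~1000 slice CT volume

console.log(transferTimeSeconds(ctVolumeMB, 1000)); // 1 Gb/s  -> 4 seconds
console.log(transferTimeSeconds(ctVolumeMB, 15));   // 15 Mb/s -> ~267 seconds (over 4 minutes)
```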
While server side rendering sounds great, interactivity suffers dramatically when latency is present. While bandwidth is generally not an issue for medical imaging today, latency still is. If you think about how server side rendering works, the client has to send user activity (mouse movements, key presses) to the server, where it gets processed, and the server then sends back the resulting image. The time it takes for a message to travel from the client to the server and back again is the latency. In the enterprise, latency is usually quite low - around 1 ms. Latency that low produces a user interface so responsive that many users cannot distinguish it from local client side rendering. Unfortunately, many users need to access images outside of the enterprise, where latency quickly becomes an issue. Assuming the server side rendering itself is instant (which is not necessarily the case), latency alone caps the best case frame rates as follows:
25 ms ping = 40 FPS
50 ms ping = 20 FPS
100 ms ping = 10 FPS
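These numbers simply divide 1000 ms by the round trip time; a minimal sketch of that ceiling (it ignores server render time and image download time, so real frame rates would be lower):

```typescript
// Best case frame rate when every frame needs one full client-server round trip.
function bestCaseFps(roundTripMs: number): number {
  return 1000 / roundTripMs;
}

[25, 50, 100].forEach((ping) => {
  console.log(`${ping} ms ping = ${bestCaseFps(ping)} FPS`);
});
// 25 ms ping = 40 FPS
// 50 ms ping = 20 FPS
// 100 ms ping = 10 FPS
```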
In a MAN or WAN you may see 25-50 ms, which will meet most users' needs. Once you get outside of the MAN/WAN (e.g. across the country or planet), latency quickly jumps to the 80-150 ms range. This is a major issue for remote users as well as for any cloud based system that uses server side rendering. The major cloud vendors such as Amazon and Azure do have a number of data centers geographically distributed across the country (and the world). It is possible to deploy a server to each data center and connect the client to the server with the lowest ping time. This doesn't happen automatically though, and is something to consider when looking at any cloud based solution.
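One way a client could handle this itself is to measure the round trip time to each region's rendering endpoint and connect to the fastest one. A rough sketch, where the endpoint URLs and function names are purely hypothetical:

```typescript
// Measure round trip time to each candidate rendering server and pick the lowest.
// The endpoint URLs below are placeholders, not real services.
async function pingMs(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD", cache: "no-store" });
  return performance.now() - start;
}

async function pickClosestServer(urls: string[]): Promise<string> {
  const timings = await Promise.all(
    urls.map(async (url) => ({ url, ms: await pingMs(url) }))
  );
  timings.sort((a, b) => a.ms - b.ms);
  return timings[0].url; // server with the lowest measured latency
}

// Usage (placeholder region endpoints):
// pickClosestServer([
//   "https://us-east.example.com/ping",
//   "https://eu-west.example.com/ping",
// ]).then((server) => console.log(`Rendering via ${server}`));
```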