Quick HOOPS Communicator Overview
HOOPS Communicator consists of three main components: the Web Viewer, the server, and the data-authoring tools.
- Web Viewer: This WebGL-based component is embedded in the web browser on the client side and is responsible for displaying CAD data, PMI views, attributes, measurements, markup, and more.
- Server: Often referred to as a single component, it is actually composed of three parts: the Streaming Cache Server, the HOOPS Server, and the File Server.
  - HOOPS Streaming Cache Server: Reads stream cache data on the server and streams it down to the viewer. Stream cache is our file format that enables high-performance rendering with large data sets.
  - HOOPS Server: Responsible for orchestrating the stream cache servers on the backend.
  - File Server: Provided primarily for convenience so that our users can get up and running quickly. Many will replace it with something more standard like Apache or Nginx.
- Data Authoring Tools: Tools that allow you to ingest data, such as CAD or polygonal data, into the system.
  - Binary: We deliver a binary called Converter, which most of our users take advantage of. It includes a set of command-line options so you can pick and choose what you want from the CAD data.
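As an illustration, a Converter run that produces a stream cache file for SCS loading might look like the line below. The option names here are indicative only; flags differ between releases, so check the Converter documentation that ships with your version:

```
converter --input part.CATPart --output_scs part.scs
```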
More on HOOPS Servers
Since we serve a lot of different industries, we try to make our tools as flexible as possible. Because of this, it can be confusing for new users to know what their options are when starting out. Here are some quick notes to keep in mind:
- You can use the Web Viewer without any server components at all, which we call SCS viewing and loading. Basically, you just load the SCS file directly into the Web Viewer at runtime via an asynchronous fetch.
- The majority of our users utilize client-side rendering, which only requires the Stream Cache Server component.
- Your application can choose to manage the lifecycle of the stream cache server instances on its own. Many users don’t want to manage those lifecycles themselves, so they use the HOOPS Server instead.
- We don’t recommend you go into production with the provided file server, since it’s mainly meant to help you with initial development.
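To illustrate the SCS loading flow mentioned above, here is a small sketch of fetching an SCS file asynchronously and handing the bytes to the viewer. The fetch helper is plain JavaScript; the commented-out viewer calls use names from the HOOPS Web Viewer API (`Communicator.WebViewer`, `Model.loadSubtreeFromScsBuffer`), which you should verify against the documentation for your version:

```javascript
// Fetch an SCS file and return its raw bytes.
async function loadScsBytes(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to fetch ${url}: ${response.status}`);
  }
  return new Uint8Array(await response.arrayBuffer());
}

// In the browser, the bytes would then be handed to the Web Viewer,
// along the lines of (API names per your version's docs):
//
//   const viewer = new Communicator.WebViewer({ containerId: "viewer" });
//   viewer.start();
//   const scsBytes = await loadScsBytes("models/part.scs");
//   await viewer.model.loadSubtreeFromScsBuffer(
//       viewer.model.getAbsoluteRootNode(), scsBytes);
```

Note that no server component is involved here beyond whatever serves the static `.scs` file.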
What’s the problem?
The Stream Cache Server, or ts3d_sc_server, listens on its own port. We designed it this way so each application can allocate resources dynamically: when a user joins, a new stream cache server instance starts up, and when the session ends, the instance detects that the WebSocket connection has closed and shuts itself down. In most web applications, standard web traffic is limited to ports 80 and 443 (SSL), and most applications want to expose only those standard ports. If we were to open 200 ports for 200 concurrent users, we would be leaving a lot of non-standard ports open, which may be difficult to get your IT team to agree to.
What is a reverse proxy?
A reverse proxy is an intermediary between the client and your backend servers or services. When a web request comes in, the reverse proxy intercepts the connection and decides what to do with it: it could serve static content, rewrite the URL, or direct the traffic to a backend server.
In this post, we’ll be talking about one of those uses: IP masking, or reverse proxying. If you’re building a web architecture, we do recommend doing a bit of research on what reverse proxies are used for, what they can be used for, and what you might use them for. It’s also worth looking at some of the common software in this space. For this example, we’re going to use Nginx, a very popular, widely used open-source tool. We’ll be using a Docker instance of it, but Apache can do the same job.
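To make that concrete, here is a minimal sketch of an Nginx configuration doing both jobs at once: serving static content and reverse-proxying traffic to a backend. The hostnames, ports, and paths are placeholders for illustration, not values from our sample:

```nginx
server {
    # Terminate client traffic on a standard port.
    listen 80;
    server_name example.com;

    # Serve the static web application directly.
    location / {
        root /var/www/app;
        index index.html;
    }

    # Reverse-proxy API traffic to a backend service.
    location /api/ {
        proxy_pass http://backend-host:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```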
Web Architectures
When it comes to building web architecture, there are various approaches and factors to consider. The use case, expected concurrent users, data size, and usage patterns all play a role in determining the architecture. It’s important to assess the available tools provided by cloud service providers like Azure, Google Cloud, and AWS.
To plan the architecture, it’s recommended to start by outlining it on paper and then explore the tools offered by the chosen cloud service provider. Load balancers, reverse proxies, container management, auto-scaling groups, and Kubernetes are some of the technologies that may come into play. It’s easy to get overwhelmed, so it’s advisable to start with a simple architecture, build a proof of concept, and adjust based on feedback and usage patterns.
If you need help with planning out your web architecture, feel free to either post here in the forum or reach out to the consulting team directly.
Web Architecture Example
Take a look at this example setup. We’ve got our external users, Janet, Karen, and Bradford.
They’re connecting to our web application via a web server. That web server goes through to the proxy, and the proxy looks at what type of connection has come in and makes a decision: either the traffic goes through to the HOOPS Server’s API, if we’re using the handshake mode, or the proxy asks for the URI of a server that has been started so the client can connect to it.
So in that case, we connect through Nginx to the HOOPS Server, which starts a stream cache server on the backend and replies to the reverse proxy with the URL to use. That URL is passed back out to our web application, which then starts the Web Viewer. The viewer in turn connects through the proxy to that backend server, so everything travels over standard ports.
A quick note before we move on: everything is laid out as discrete components here. You’re going to see today that my web server and my reverse proxy are one and the same; we’re using Nginx for both. You could do that in production, or you could use Nginx for your routing and another technology for your web server. It’s up to you. You may even have two instances of Nginx running, one on each of two machines, with one acting as the proxy.
Also, please note that we are using two different approaches in this diagram. One is the REST handshake mode, and the other is the WebSocket proxy mode. If we were to put that down on paper, it would look something like this: our end users come through to the web server, which goes through to the proxy.
In this case, our reverse proxy is only proxying one port, and it proxies through to the HOOPS Server on the backend. You don’t strictly need to do this: you could leave the HOOPS Server port open, in which case you’d have ports 80, 443, and 1182 (the HOOPS Server port) open and wouldn’t need a reverse proxy here. However, that would mean your HOOPS Servers are exposed to the public, so anyone could ping or access them. The ideal setup is to have everything past the reverse proxy sit behind a firewall, accessible only through the proxy.
In this mode, the proxy goes straight to the HOOPS Server, and instead of making a REST call to get a URL for a running Stream Cache Server, the HOOPS Server starts that server itself and connects the client straight through.
In this instance, we proxy twice. We proxy the connection from the web server through to the HOOPS Server, with just one URL for all your connections. The HOOPS Server manages, starts, and stops the stream cache servers, and then proxies each connection through to the right instance and back out to the web server.
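As a sketch of what the proxy side of this might look like in Nginx: WebSocket connections need the HTTP Upgrade headers forwarded explicitly, so a location block along these lines routes viewer traffic arriving on the standard port through to the HOOPS Server. The path, hostname, and port here are placeholder assumptions; check your own deployment for the real values:

```nginx
# Route viewer WebSocket traffic through to the HOOPS Server.
location /hoops/ {
    proxy_pass http://hoops-server:1182/;    # placeholder backend host/port
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;  # forward the WebSocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;                # keep long-lived viewer sessions open
}
```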
Exercise
During this exercise in the video, we’ll walk you through a sample setup we’ve built using Docker, Nginx, and HOOPS Communicator. You’ll also find the sample code in the zip file below.
reverseProxy.zip (9.6 KB)