Authoring Shattered Models in HOOPS Communicator (libsc)

One of HOOPS Communicator’s strongest capabilities is its ability to stream, render, and quickly interact with massive models and data sets. These models may be composed of hundreds of millions of triangles, hundreds of assembly nodes, attribute-rich parts, and much more that extends beyond vertices. HOOPS Communicator’s WebViewer and Stream Cache streaming server intelligently organize and manage this data to deliver near-immediate interaction with these very large models and assemblies.

But what about getting the data into this architecture? To leverage HOOPS Communicator, model files must first be converted and stored as Stream Cache files, which can add unwanted overhead to your workflow. Do we always need to wait for all of this data to be processed? Can the authoring side be as efficient and user-friendly with large models as the viewing side is?

Yes, it can be. Often, these large models are complex assemblies composed of many individual parts or files. To improve processing times, support individual part updates, or manage parts separately, we can leverage the ‘shattered’ model capability in Stream Cache. Shattered mode refers to the ability to split a CAD assembly into separate files while (largely) retaining its original hierarchy. This approach to storing CAD data allows individual components of a larger model to be updated without reconverting or reauthoring the whole assembly.

While documentation is available on using the HC Converter application or library to prepare your models for shattered mode, there is little information on using shattered authoring with the libsc authoring library. You can feed your HOOPS Exchange-supported file formats into HC Converter and process them in a shattered manner, but there is limited guidance on how to build a shattered assembly from scratch while authoring your own data.

The idea is relatively straightforward. In a traditional monolithic authoring workflow, you would open a new SC::Store::Cache, insert data into a SC::Store::Model object (vertices, normals, colors, etc.), associate that data with nodes in your assembly tree (if desired), and prepare the model for streaming. In a shattered workflow, those major steps are the same, except that we create a ‘master’ assembly tree and individual ‘external’ models that this tree references. You can think of the master assembly tree as a skeleton assembly file that simply references other Stream Cache models, rather than containing all of the Stream Cache data in a single model object itself. This way, each external model associated with the assembly can be processed independently of the others while the hierarchy of the entire model is retained.
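As a toy illustration of that relationship (plain C++ only; these structs are not part of the libsc API), the master model is little more than a root node plus a list of names of external models it references, while each external model carries its own self-contained data. Re-authoring one external model touches neither the master nor its siblings:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical illustration of the shattered layout -- NOT libsc types.
struct ExternalModel {
    std::string name;          // matches the model's directory name in the cache
    std::size_t triangleCount; // stands in for the model's own geometry/tree data
};

struct MasterAssembly {
    std::string rootName;                  // skeleton root node
    std::vector<std::string> externalRefs; // names of referenced external models
};

// Updating one part leaves the master tree and all other parts untouched.
inline void updatePart(std::vector<ExternalModel>& cache, const std::string& name,
                       std::size_t newTriangleCount) {
    for (auto& m : cache)
        if (m.name == name) m.triangleCount = newTriangleCount;
}
```

The point of the sketch is only the separation of concerns: the master holds references by name, and each external model is the sole owner of its data.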

To start, we will initialize a new session with the libsc authoring library, and create our ‘master’ assembly tree.

SC::Store::Database::SetLicense(HOOPS_LICENSE);
SC::Store::Cache cache = SC::Store::Database::Open(logger);

// Author assembly tree : Create root node.
SC::Store::AssemblyTree master_assembly_tree(logger);
uint32_t master_root_id = 0;
master_assembly_tree.CreateAssemblyTreeRoot(master_root_id);
master_assembly_tree.SetNodeName(master_root_id, "Shattered Model AsmTree Root");

With this created, we can move onto authoring individual external models that we can reference as nodes off this master tree. Each external model can have its own assembly tree associated with it, and when it is brought into the master hierarchy, that external assembly tree data will be retained.

For each external model, we need to open a new Model object and make it known to the Cache.

if (cache.Exists(modelpath.c_str()))
{
   std::filesystem::remove_all(modelpath);
}
auto model = cache.Open(modelpath.c_str());
AuthorExternalModel(model, nInstances, assemblytree, offset, nExtModels * 2);

In the code above, I delete the existing model (if applicable; you could also reopen the same model if that better fits your needs) and open a new one. I then create a model of N cube instances (don’t worry about the offset and spacing parameters; they are just my brute-force way of displacing each cube instance when authoring N models). The details of authoring your data are outside the scope of this topic, but the AuthorExternalModel function is shown below to demonstrate how all Model authoring is independent of other models and of the master assembly tree. The code authors its own data into its own model, builds its own assembly tree, and then saves that model into its own Stream Cache directory. At the conclusion of this function, you would technically have a streamable, individual external model.

void AuthorExternalModel(SC::Store::Model &model, int nCubes, bool assemblytree, int offset, int spacing)
{
    Timer timer;
    ApplicationLogger logger;
    timer.Start(("Authoring External Model with " + std::to_string(nCubes) + " instances."));
    SC::Store::AssemblyTree model_assembly_tree(logger);
    uint32_t root_id = 0;

    if (!model_assembly_tree.CreateAssemblyTreeRoot(root_id))
    {
        std::cerr << "Failed to create external model assembly root." << std::endl;
    }
    if (!model_assembly_tree.SetNodeName(root_id, "External Model Root"))
    {
        std::cerr << "Failed to set external model root node name." << std::endl;
    }
    uint32_t model_node = 0;
    if (!model_assembly_tree.CreateChild(root_id, model_node))
    {
        std::cerr << "Failed to make external model child node off root." << std::endl;
    }
    if (!model_assembly_tree.SetNodeName(model_node, "External Model"))
    {
        std::cerr << "Failed to set external model child node name." << std::endl;
    }

    // Include this model into itself; the returned InclusionKey is needed to
    // reference its mesh instances from the assembly tree.
    SC::Store::InclusionKey model_incKey = model.Include(model);

    int x = 0;
    int y = offset;
    int z = 0;

    std::random_device rd;  // Will be used to obtain a seed for the random number engine
    std::mt19937 gen(rd()); // Standard mersenne_twister_engine seeded with rd()
    std::uniform_real_distribution<> dis(0.0, 1.0);
    SC::Store::Color color = SC::Store::Color(dis(gen), dis(gen), dis(gen), 1);
    SC::Store::MaterialKey material_key = model.Insert(color);

    printf("Creating Cube Mesh Object...\n");
    SC::Store::MeshKey cubeMeshKey = CreateCubeMesh(model);

    printf("Creating %i Mesh Instances...\n", nCubes);
    auto limit = cbrt(nCubes) * (spacing / 4);
    for (int i = 0; i < nCubes; i++)
    {

        SC::Store::Matrix3d matrix;
        matrix.SetIdentity();
        matrix.Translate(x, y, z);
        // Determine translation values for next instance
        if (x > limit)
        {
            x = 0;
            y += spacing;
        }
        if (y > (limit + offset))
        {
            y = offset;
            z += 2;
        }
        else
        {
            x += 2;
        }

        SC::Store::MatrixKey matrix_key = model.Insert(matrix);

        SC::Store::InstanceKey instance_key =
            model.Instance(cubeMeshKey, matrix_key, material_key, material_key);

        if (assemblytree)
        {
            // Author assembly tree : Create child node.
            uint32_t child_node = 0;
            model_assembly_tree.CreateChild(model_node, child_node);
            // Create body instance node.
            uint32_t body_instance_node = 0;
            model_assembly_tree.CreateAndAddBodyInstance(child_node, body_instance_node);
            // Register mesh instance.
            model_assembly_tree.SetBodyInstanceMeshInstanceKey(
                body_instance_node, SC::Store::InstanceInc(model_incKey, instance_key));
            // Register matrix.
            model_assembly_tree.SetNodeLocalTransform(child_node, matrix);
        }
    }

    if (!model_assembly_tree.SerializeToModel(model))
    {
        std::cerr << "Failed to serialize model tree of external model." << std::endl;
    }
    model.PrepareStream(SC::Store::CompressionStrategy::Fast);
}
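The translation stepping inside the instancing loop above can be factored out and exercised in isolation. The sketch below is a standalone reimplementation of that placement logic (cubePositions is a hypothetical helper name, not libsc code): walk x in steps of 2, wrap to a new y row past the limit, and wrap to a new z layer past limit + offset.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Standalone reimplementation of the instance-placement logic from
// AuthorExternalModel, returning the (x, y, z) translation of each cube.
std::vector<std::array<int, 3>> cubePositions(int nCubes, int offset, int spacing) {
    const double limit = std::cbrt(nCubes) * (spacing / 4);
    int x = 0, y = offset, z = 0;
    std::vector<std::array<int, 3>> positions;
    for (int i = 0; i < nCubes; i++) {
        positions.push_back({x, y, z});
        // Same update rules as the loop above.
        if (x > limit) { x = 0; y += spacing; }
        if (y > limit + offset) { y = offset; z += 2; }
        else { x += 2; }
    }
    return positions;
}
```

Pulling the stepping into a pure function like this makes the grid shape easy to inspect for a given nCubes/offset/spacing combination before committing to authoring real instances.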

If I launch a WebViewer and request this model, you will see a single model of 1000 cube instances in a randomly assigned color. It has its own assembly hierarchy and, for all intents and purposes, is a self-contained model.

We can repeat this authoring for as many external models as we like; in this example, I will author 10 different external models. Once you have authored all of your ‘external’ models, you then need the master assembly tree to reference and include them. We do this using the SetExternalModel function on the AssemblyTree class, which I have abstracted into its own function.

void addExternalModelToMasterAssembly(SC::Store::AssemblyTree &master_assembly_tree, uint32_t master_root_id, std::string externalModelname)
{
  uint32_t master_child_id = 0;
  if (!master_assembly_tree.CreateChild(master_root_id, master_child_id))
  {
      std::cerr << "Failed to create master model child node." << std::endl;
  }
  master_assembly_tree.SetNodeName(master_child_id, (externalModelname + " Reference Root Node").c_str());
  if (!master_assembly_tree.SetExternalModel(master_child_id, externalModelname.c_str()))
  {
      std::cerr << "Failed to set master model external model reference." << std::endl;
  }
}

Once all external models have been referenced, we can build the entire master model by calling BuildMasterAssemblyModel, passing in the cache where our external models live, and providing an output name for the model.

// Link each model authored to nodes of the master assembly tree
for (int iModel = 1; iModel <= nExtModels; iModel++)
{
    auto modelname = "ExternalModel" + std::to_string(iModel);
    addExternalModelToMasterAssembly(master_assembly_tree, master_root_id, modelname);
}

if (cache.Exists((input_model_path + model_name).c_str()))
    std::filesystem::remove_all((input_model_path + model_name));
if (!master_assembly_tree.BuildMasterAssemblyModel(input_model_path.c_str(), (input_model_path + model_name).c_str(), nullptr, false))
{
    std::cerr << "Failed to build master model assembly." << std::endl;
}

When BuildMasterAssemblyModel is called, it resolves every external model that was referenced with SetExternalModel: it looks in the cache for each model name you provided and includes that model in the final assembly.
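Because that lookup is by name, a mismatch between the name passed to SetExternalModel and the model’s directory name in the cache would leave that part out of the build. A simple pre-flight check can catch this before calling BuildMasterAssemblyModel; the sketch below uses plain std::filesystem (not a libsc call, and missingExternalModels is a hypothetical helper), relying on the fact that each Stream Cache model lives in its own directory:

```cpp
#include <filesystem>
#include <string>
#include <vector>

// Hypothetical pre-flight check: confirm that every external model name we
// registered with SetExternalModel resolves to a directory in the cache.
// Returns the names that are missing so the caller can fail early.
std::vector<std::string> missingExternalModels(
    const std::filesystem::path& cacheDir,
    const std::vector<std::string>& externalNames) {
    std::vector<std::string> missing;
    for (const auto& name : externalNames)
        if (!std::filesystem::is_directory(cacheDir / name))
            missing.push_back(name);
    return missing;
}
```

If the returned list is non-empty, fix the names (or re-author the missing models) before building the master assembly, rather than discovering the gap in the viewer.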

After we have authored the external models and built the final assembly, our cache directory looks like this:
[Image: cache directory listing showing each external model directory alongside the shattered_model entry]

As before, we could load any one of these individual models, but now we also see a ‘shattered_model’ file.

Once the master model has been built, you can load this model, which will reference all of the other external Stream Cache models. If in the future you needed to update a single part, you could update just that part; the master model will pull in those changes and leave all other parts untouched.

Let’s now reference the shattered model in the WebViewer.

You can see the original orange ExternalModel1 data we loaded earlier, but now the other 9 models are brought into one assembly as well. You can also see in the assembly tree that the external model assembly data is retained.

In a future post, we will see how we can leverage this shattered authoring approach to process our models in a concurrent way, and leverage significant performance gains by simultaneous processing of model parts in an assembly.
