6 factors to consider while selecting an Algorithm Library

Processing geometric inputs plays a crucial role in the product development cycle. Ever since the introduction of complex algorithm libraries, the new product development (NPD) landscape has changed drastically, and for good. Typically, a well-suited library streamlines the work process by executing complicated tasks through a wide array of functions.

An algorithm library works on a simple principle: it is fed specific inputs and processes them through the functionalities built into it. For example, in the manufacturing industry, the Point Cloud Library (PCL) specialises in converting millions of scanned points into mesh models.
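As a rough illustration of the kind of function such a library provides (PCL itself is a C++ library; the helper below is a hypothetical pure-Python sketch, not PCL's API), voxel-grid downsampling is a typical first step that thins dense scan data before meshing:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Collapse a raw point cloud onto a voxel grid: every point falling
    into the same cubic cell is replaced by the centroid of that cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    centroids = []
    for pts in cells.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# Two nearly coincident scan points collapse into one representative point.
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (5.05, 5.05, 5.05)]
print(len(voxel_downsample(cloud, voxel_size=0.1)))  # 2
```

A production library wraps dozens of such operations (registration, filtering, surface reconstruction) behind one consistent interface; that is what "a wide array of functions" buys you.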

There are particular algorithms for numerous complex tasks, and platforms that use specific, unique functionalities and programming to get the job done. Manufacturing requirements and end-product objectives lay down the necessities for choosing a particular algorithm library. This article sheds light on 6 key factors to consider while selecting one.

Required functionality

Once data has been fed and stored, methods for processing and compressing that data become highly interesting. Different algorithm libraries come with their own sets of functionalities. Ideally, functionalities are developed by an in-house team, tailored to the design objectives. It is good practice to develop functionalities that address complex operations as well as simple tasks, and to anticipate functions that might be needed down the line. In the end, one's objective defines which set of functionalities the chosen algorithm library must provide.

Data Size and Performance

Huge datasets can be challenging to handle and to share between project partners, and larger data translates directly into longer processing times. All the investment in hardware and quality connections will be of little use if one is using a poorly performing library. An algorithm library that allows multiple scans to be processed simultaneously should be the primary preference. One should also have a clear definition of the performance expected from the library, depending on whether the application runs in real time or in batch mode.
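As a sketch of what processing multiple scans simultaneously can look like, the snippet below fans dummy scans out to a pool of workers. The scan data and `process_scan` body are stand-ins for real work such as registration or meshing; a thread pool is shown for simplicity, though CPU-bound workloads would typically use a process pool instead:

```python
from concurrent.futures import ThreadPoolExecutor

def process_scan(scan):
    # Placeholder for real per-scan work (registration, meshing, ...):
    # here we simply count the points in the scan.
    return len(scan)

# Three dummy scans of different sizes.
scans = [[(i, i, i) for i in range(n)] for n in (100, 200, 300)]

# Hand each scan to a separate worker so a large batch does not serialize.
with ThreadPoolExecutor() as pool:
    point_counts = list(pool.map(process_scan, scans))

print(point_counts)  # [100, 200, 300]
```

Whether this pays off depends on the library: if its functions release the interpreter or scale across cores internally, batch throughput improves roughly with the number of workers.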

Processing speed

Libraries that automate manual processes often emphasize processing speed, delivering improvements to either processing or modeling. This allows for faster innovation and often better products. As witnessed in the case of point clouds, the ability to generate scan trees after a dataset has been processed greatly improves efficiency. A system with a smooth interface that permits fast execution greatly reduces the effort and time taken to handle large datasets.

Make versus Buy

This question arises in the early phases of processing. Take point cloud libraries as an example. Some of the big brands producing point cloud processing libraries are Autodesk, Bentley, Trimble, and Faro. However, most of these systems arrive as packages bundled with 3D modelling tools, thereby driving up costs. If such is the case, it is advisable to build an in-house point cloud library that suits the necessities. Nowadays, many open-source platforms, such as PCL, can get the job done and have proven to be quite beneficial.

Commercial Terms

The commercial aspect also plays a vital role while choosing an algorithm library. Whether to opt for a single or a recurring payment depends upon the volume and nature of the project.

There are different models to choose from, if one decides to go with licensing a commercial library:

A: Single payment: no per-license fees, and an optional annual maintenance contract (AMC)

B: Subscription based: an annual subscription, without per-license fees

C: Hybrid: a certain down payment plus per-license revenue sharing

Whatever option you select, make sure there is a clause in the legal agreement that caps any increase in charges at a reasonable limit.

Storage, Platforms and Support

Storage has become less of an issue than it was even a decade ago; desktops and laptops with more than a terabyte of capacity are all over the market. Not every algorithm library requires heavy graphics, so investing in a quality graphics card is only important if your preferred library demands it. That doesn't mean settling for the cheapest hardware and storage available: a quality processor with plenty of RAM serves well when the processing task is CPU- and memory-intensive. Another point to look into is the type of platform, or interface to be exact, that the algorithm library supports. Varied requirements call for varied platforms such as Windows, macOS, and Linux. The usage and licensing should be taken into account before selecting an interface.

Last but not least, inputs from customers are highly significant, and there has to be a robust support system to address any grievance from the customer side. Having trained support staff or a customised automated support system must be given high priority.


Five points to consider for a CAD Software Development Process

In any software development process, the methodology involved is more or less the same. The most generic requirements are developers, a preferred programming language, testers, and a carefully planned set of actions. The same applies to the development of CAD software.

Having CAD software that can actually meet product development needs is an obvious necessity. Although there is a lot of common ground between a CAD software development project and a regular software development project, there are criteria very specific to CAD development that need to be addressed.

Let us take a walkthrough:

  • Acceptance Criteria
    Acceptance criteria are a list that spells out user requirements and product scenarios, one by one, along with the conditions under which each requirement is met, thus removing uncertainty and misunderstandings about the client's expectations. However, defining acceptance criteria is not simple, and it is not realistic to expect a 100% pass rate. In such cases, an ideal approach is to have a set of test cases with defined input and output data.
  • Algorithmic Complexities
    To successfully develop a complex product, two critical questions must be answered: how to develop the right product, and how to develop the product right. Unlike problems such as interest rate calculations or workflow management systems, there is no single defined set of steps that leads to the final answer. There are often multiple algorithms for a single problem, and the situation becomes more complicated when a particular algorithm, deemed perfect for one situation, does not perform well in a different scenario, which often leads to trade-offs.
  • Tolerances
    Tolerance plays a significant role in evaluating product quality and cost. While tight tolerances ensure design requirements in terms of function and quality, they also add requirements in manufacturing, inspection, and service, which results in higher production cost. Most CAD data is stored in floating-point variables (doubles and floats), so floating-point precision and tolerance play a very important role in the algorithms. When using data from other systems, say a STEP file from another source, a mismatch in tolerance can cause a lot of issues in the destination system.
  • Risk of Regression
    Adding a new functionality or improving an algorithm always carries the risk of breaking test cases that worked before the change. One should always maintain a robust test suite for catching regressions during testing. Creating a regression test suite requires thorough application knowledge and a complete understanding of the product flow.
  • Interoperability
    The quick emergence of varied CAD software has democratized design, leading to the use of multiple CAD systems in the design process and aggressively challenging CAD interoperability. Different suppliers require different CAD platforms, depending on many factors, primarily the nature of the task and the product on which it must work. Merging different CAD data without affecting the design intent is quite a hassle. Although a lot of software these days supports different CAD file formats, there are instances where the particulars of a project have confined the product to one CAD system. Interoperability saves extra work, and whether to make your own software compatible with others is a decision that should be weighed seriously.
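The tolerance point above can be illustrated in a few lines. The tolerance value and helper name below are hypothetical, chosen only to demonstrate the idea; real geometry kernels carry their own tolerance schemes:

```python
import math

# Hypothetical linear tolerance in model units; a real kernel defines its own.
LINEAR_TOL = 1e-6

def same_length(a, b, tol=LINEAR_TOL):
    """Compare two lengths the way a geometry kernel must: never with ==,
    always against an explicit tolerance."""
    return math.isclose(a, b, rel_tol=0.0, abs_tol=tol)

# 0.1 + 0.2 is not exactly 0.3 in binary floating point...
assert (0.1 + 0.2) != 0.3
# ...but within a sensible modelling tolerance they are the same edge length.
assert same_length(0.1 + 0.2, 0.3)
```

The STEP-import problem described above is exactly this comparison going wrong at system boundaries: if the source system wrote geometry valid at 1e-4 and the destination stitches at 1e-6, edges that were "coincident" upstream no longer are.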

New Application Development - Addon vs Standalone

At the starting phase of developing an application, the primary question is usually this: should it be an add-on application or a standalone application?

Before we get into the details, we need to understand what exactly these two terms mean in the computing world.

An add-on (also known as an addon or plug-in) is a software component that is added to an existing computer program to introduce specific features.

As per Microsoft's style guide, the term add-on is supposed to refer to hardware features, while add-in should be used only for software utilities, although these guidelines are not strictly followed and the terms are mixed up quite often. When a program supports add-ons, it usually means it supports customization. Web browsers have long supported the installation of add-ons that suit the tastes of different users by customizing the look and feel of the browser.

There are many reasons for introducing add-ons in computer applications. The primary reasons are:

  • To make an application extensible by enabling third-party developers to create a variety of add-ons
  • To support new features
  • To reduce the size of an application
  • To separate source code from an application because of incompatible software licenses

Usually, the host application operates independently, which makes it possible for developers to add and update add-ons without impacting the host application itself. A host application does not depend on its add-ons; on the contrary, an add-on depends on the host application, relies on the services the host provides, and cannot operate by itself.
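This one-way dependency can be sketched as a minimal plugin registry. All names here are illustrative, not any real product's API; the point is only that the host runs fine with zero add-ons, while each add-on needs the host to be invoked at all:

```python
class Host:
    """A hypothetical host application offering an add-on slot."""

    def __init__(self):
        self._addons = {}  # registered add-ons, keyed by command name

    def register(self, name, addon):
        # Third parties extend the host by registering callables.
        self._addons[name] = addon

    def run(self, name, *args):
        # Dispatch to an add-on if one is installed; otherwise degrade
        # gracefully, since the host must not depend on any add-on.
        addon = self._addons.get(name)
        return addon(*args) if addon else None

host = Host()
host.register("shout", lambda text: text.upper())  # a third-party add-on
print(host.run("shout", "hello"))    # HELLO
print(host.run("missing", "hello"))  # None: the host still works without it
```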

A standalone application is software that does not come bundled with other independent software. In simple words, a standalone application does not require any separate software to operate.

A stand-alone application deploys services locally, uses the services, and terminates the services when they are no longer needed. If an application does not need to interact with any other applications, then it can be a stand-alone application with its own exclusive local service deployment. Services locally deployed by this application are not available to any other application.

A standalone application needs to be installed on every system, which makes it hard to maintain. In the event of a system crash or a virus attack, when a system needs to be replaced or reinstalled, the application also needs to be reinstalled. Access to the application is limited to the systems on which it is installed.

Standalone applications are generally not available online, and remote availability of data is practically impossible. However, there are situations where a standalone application is the best choice. Here are a few:

  • Text-mode printing on pre-printed stationery, which browsers fail to do
  • Where data security is very high and you don’t want the data to travel on the wire at all
  • Design applications which need very high responsiveness and at the same time work on big data structures
  • Printing on legacy continuous stationery
  • No need for networking; the application is needed only on a single system
  • Support for more hardware, such as barcode printers, webcams, biometric devices, LED panels, etc.
  • More operating-system-level operations, such as direct backup to external devices, mouse control, etc.
  • Creation and manipulation of local files

Points to consider while developing regression suite for CAD Projects

As software development progresses, there comes a stage where the software needs to be evaluated before it is declared final. This phase is known as testing. Testing detects and pinpoints bugs and errors in the software, which leads to rectification. There are instances where the rectifications introduce new errors, sending the software back for another round of testing and creating a repeating loop. This repeated testing of an already tested application, to detect errors resulting from changes, has a name: regression testing.

Regression testing is the selective retesting of an application to ensure that the modifications carried out have not caused unintended effects in previously working functionality.

In simple words, the goal is to ensure that all the old functionalities still run correctly with the new changes.

This is a very common step carried out by testers in any software development process. Regression testing is required in the following scenarios:

  • If the code is modified owing to changes in requirements
  • If a new functionality is added
  • While rectifying errors
  • While fixing performance related issues

Although every software application requires regression testing, there are specific points that apply to different applications based on their functioning and utility. Computer-aided design (CAD) applications in particular have specific points to keep in mind before regression testing.

Regression testing can be broadly classified into two categories: UI testing and functionality testing. UI (user interface) testing exercises an application's graphical interface, and numerous tools are available for carrying it out. Functional testing, however, is a different situation. This article focuses on the points to take care of while carrying out functional regression testing.

Here are the most effective points to consider for functional regression testing:

  • It is important to know what exactly needs to be tested and the plans or procedures for the testing. Collect the information and test the critical things first.
  • It is important to be aware of market demands for product development. A document or matrix should be prepared to link the product to the requirements and to the test cases, and it should be updated as requirements change.
  • Include test cases for the functionalities that have undergone the most, and the most recent, changes.
    It is difficult to keep writing and modifying test cases as the application is updated often; updates introduce internal changes to the code, which in turn might break existing functionalities.
  • It is preferred to run the functionality testing in the background mode (non-UI mode) because often it is faster and eliminates problems associated with display settings on different machines.
  • One needs to lay down precise definitions of the output parameters that are of interest: the number of faces, surface area, volume, weight, centre of gravity, surface normal, curvature at a particular point, etc. It is always a good idea to have a quantifiable output parameter that can be compared.
  • It is often advisable to develop a utility that writes the parameters of interest to an output file; it could be a text, CSV, or XML file.
  • Create baseline versions of the output data files, and visually inspect every part for which baseline data is created.
  • Developing an automation script enables the entire test suite to run without any manual intervention, so the results can be compared automatically.
  • Compare the generated output data with the baseline version for every run of a test case. It is very important to keep in mind that if the output data contains doubles or floats, tolerance plays a crucial role in the comparison.
  • Some areas in the application are highly prone to errors, so much so that they usually fail with even a minute change in the code. It is advisable to keep track of failing test cases and cover them in the regression test suite.
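Several of the points above (quantifiable output parameters, baseline files, tolerance-aware comparison) can be combined into one small sketch. The CSV layout, parameter names, and tolerance value below are assumptions for illustration, not a prescription:

```python
import csv
import io
import math

TOL = 1e-6  # illustrative absolute tolerance for floating-point outputs

def compare_to_baseline(baseline_csv, current_csv, tol=TOL):
    """Compare two CSVs of 'parameter,value' rows; return the names of
    parameters whose values drifted beyond the tolerance (or vanished)."""
    def load(text):
        return {row[0]: float(row[1]) for row in csv.reader(io.StringIO(text))}
    baseline, current = load(baseline_csv), load(current_csv)
    return [name for name, value in baseline.items()
            if not math.isclose(value, current.get(name, float("nan")),
                                rel_tol=0.0, abs_tol=tol)]

# Hypothetical output of one CAD test case: faces, volume, surface area.
baseline = "faces,124\nvolume,10.500000\narea,55.250000"
current  = "faces,124\nvolume,10.500000\narea,55.251900"  # area has drifted

print(compare_to_baseline(baseline, current))  # ['area']
```

In a real suite the automation script would regenerate the current file for every test case, run this comparison, and report only the parameters that regressed.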

Failure to address performance issues can hamper the functionality and success of your application, with unwelcome consequences for end users if it does not perform to their expectations.
