Processing geometric inputs plays a crucial role in the product development cycle. Since the introduction of complex algorithm libraries, the new product development (NPD) landscape has changed drastically, and for the better. Typically, a well-suited library streamlines the work process by executing complicated tasks through a wide array of functions.
An algorithm library works on a simple principle: it is fed specific instructions, which it executes through the functionality built into it. For example, in the manufacturing industry, the Point Cloud Library (PCL) specialises in converting millions of points of scan data into mesh models.
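To illustrate the kind of heavy lifting such a library automates, here is a minimal sketch of one common point cloud operation, voxel downsampling, written in plain NumPy; the function name, grid size, and data are illustrative assumptions, not part of PCL's actual API.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a point cloud by averaging all points that fall into the same voxel."""
    # Assign each point to a voxel by integer-dividing its coordinates.
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; 'inverse' maps each point to its voxel index.
    _, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# A noisy cluster of 10,000 points collapses to far fewer representatives.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(10_000, 3))
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(len(cloud), "->", len(reduced))
```

Real libraries perform the same reduction with heavily optimised spatial data structures, which is exactly the value they add over hand-rolled code.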
There are particular algorithms for performing numerous complex tasks, and platforms that use specific, unique functionality and programming to get the job done. Manufacturing requirements and end-product objectives lay down the criteria for choosing a particular algorithm library. This article sheds light on six key factors to consider while selecting one.
Once data has been fed in and stored, methods for processing and compressing it become highly relevant. Different algorithm libraries come with their own sets of functionalities. Ideally, functionality is developed by an in-house team so that it aligns with the design objectives. It is good practice to develop functions that address complex operations as well as simple tasks, and to anticipate functions that may be needed down the line. Ultimately, one's objectives define which functionality the chosen algorithm library must provide.
Data Size and Performance
Huge datasets can be challenging to handle and share between project partners, and processing time grows with data size. All the investment in hardware and high-quality connections will be of little use with a poorly performing library. An algorithm library that can process multiple scans simultaneously should be the primary preference. One should also define clear performance expectations for the library, depending on whether the application runs in real time or in batch mode.
Libraries that automate manual processes often emphasize processing speed, delivering improvements to either the processing or the modeling. This allows for faster innovation and often better, albeit more specialised, products. As witnessed in the point cloud case, the ability to generate scan trees after a dataset has been processed greatly improves efficiency. A system with a smooth interface that permits fast execution greatly reduces the effort and time taken to handle large datasets.
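The idea of processing multiple scans simultaneously can be sketched with Python's standard process pool; `process_scan` here is a hypothetical stand-in for a CPU-heavy step such as registration or meshing, not a real library function.

```python
from concurrent.futures import ProcessPoolExecutor

def process_scan(scan_id: int) -> str:
    # Placeholder for a CPU-heavy step such as registration or meshing.
    total = sum(i * i for i in range(100_000))
    return f"scan-{scan_id}:ok"

def process_all(scan_ids):
    # Each scan is independent, so they can run on separate CPU cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(process_scan, scan_ids))

if __name__ == "__main__":
    print(process_all(range(4)))
```

For batch-mode pipelines a process pool of this shape scales roughly with the number of cores; real-time applications usually need streaming designs instead.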
Make versus Buy
This decision arises in the early phases of processing. Take point cloud libraries as an example: some of the big brands producing point cloud processing software are Autodesk, Bentley, Trimble, and Faro. However, most of these systems arrive as packages bundled with 3D modelling tools, thereby driving up costs. In such cases it is advisable to build an in-house point cloud library that suits the necessities. Nowadays, many open-source platforms, such as PCL, get the job done and have proven quite beneficial.
The commercial aspect also plays a vital role while choosing an algorithm library. Whether to opt for a single or recurring payment depends upon the volume and nature of the project.
There are different models to choose from, if one decides to go with licensing a commercial library:
A: Single payment: no per license fees, and an optional AMC
B: Subscription Based: Annual subscription, without per license fees
C: Hybrid: A certain down payment and per license revenue sharing
Whatever option you select, make sure there is a clause in the legal agreement that caps the increase in the charges to a reasonable limit.
Storage, Platforms and Support
Storage has become less of an issue than it was even a decade ago. Desktops and laptops with more than a terabyte of capacity are all over the market. Not every algorithm library requires heavy graphics: investing in a quality graphics card matters only if your preferred library demands heavy graphics usage. That does not mean settling for the cheapest hardware and storage available; a quality processor with plenty of RAM is a sound choice if the processing task is CPU- and memory-intensive. Another point to look into is the type of platform, or interface to be exact, that the algorithm library supports. Varied requirements call for varied platforms such as Windows, macOS, and Linux. Usage and licensing should be taken into account before selecting an interface.
Last but not least, inputs from customers are highly significant, and there has to be a robust support system to address any grievance from the customer side. Having trained support staff or a customised automated support system must be given high priority.
In November 2014, E-commerce giant Amazon announced the launch of Alexa, a voice-controlled virtual assistant whose task is to transform words into action. It caught the attention of tech enthusiasts and the general populace alike. The inclusion of Samuel L. Jackson’s voice in Alexa was the talk of the tech town.
Recent years have witnessed a dramatic change in the way technology interacts with humans, and Alexa is just one card out of the deck. From Tesla's Cybertruck to Facebook's EdgeRank and Google's PageRank, such innovations have drawn both awe and a little commotion within the tech community. The driving force behind them can be put under a single umbrella term: Artificial Intelligence, or AI.
Artificial intelligence (AI) can be defined as the simulation of human intelligence in machines, especially computer systems and robots. The machines are programmed to think and mimic human actions such as learning, identifying, and problem-solving.
Although AI has burst onto the scene in recent years, its history begins well before the term was first coined. It is safe to say that the principle derives from automata theory and found references in many stories and novels. Early ideas about thinking machines emerged in the late 1940s and '50s from the likes of Alan Turing and John von Neumann. Alan Turing famously created the imitation game, now called the Turing Test.
After the initial enthusiasm for and funding of machine intelligence until the early 1960s, the field entered a decade of silence: a period of reduced interest and funding for AI research and development, now known as the 'AI Winter.' Commercial ventures and financial assistance dried up, and AI went into hibernation for the period.
The late 1970s witnessed a renewed interest in AI. American machine learning pioneer Paul Werbos devised the process of training artificial neural networks through backpropagation of errors. In simple terms, backpropagation is a learning algorithm for training multi-layer perceptrons, also known as artificial neural networks.
A neural network consists of a set of algorithms that loosely mimics the human brain: much like a brain, it is designed to interpret sensory data, cluster raw inputs, and classify them accordingly.
Backpropagation gained widespread recognition in 1986 through the efforts of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. In 1993, Wan became the first person to win an international pattern recognition contest with the help of backpropagation.
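For readers unfamiliar with the mechanics, here is a minimal sketch of backpropagation training a two-layer perceptron on the XOR problem in plain NumPy; the layer sizes, learning rate, and iteration count are illustrative choices, not a canonical recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_loss = float(np.mean((forward(X)[1] - y) ** 2))

for _ in range(5000):
    h, out = forward(X)
    # Backward pass: propagate the output error layer by layer.
    d_out = (out - y) * out * (1 - out)  # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back to the hidden layer
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

final_loss = float(np.mean((forward(X)[1] - y) ** 2))
print(f"MSE: {initial_loss:.3f} -> {final_loss:.3f}")
```

The backward pass is the whole trick: the chain rule lets the output error be distributed across every weight in every layer, which is what made training multi-layer networks practical.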
Since the emergence of computers and artificial intelligence, computer scientists have drawn parallels between these intelligent machines and human minds. The comparison reached a pinnacle in 1997, when IBM's Deep Blue computer played a chess match against renowned chess master Garry Kasparov. The match ran over several days and received massive media coverage. Over the six-game match, Kasparov secured one win, Deep Blue secured two, and the remaining three games were drawn. The highlight of the spectacle, however, was a machine's ability to push the boundaries and set a new benchmark for computers.
Deep Blue made an impact on computing in many different industries. It enabled computer scientists to explore and develop ways to design a computer to tackle complex human problems with the help of deep knowledge to analyze a higher number of possible outcomes.
The rise in popularity of social media, led by Facebook, saw the implementation of AI/ML in a wide array of applications. One prominent example is DeepFace. As the name suggests, DeepFace is a deep learning facial recognition system designed to identify human faces in digital images. DeepFace was trained on four million images uploaded by Facebook users and is said to reach an accuracy of around 97%. Not long after, the Generative Adversarial Network (GAN) was introduced: a class of machine learning model designed to generate new data resembling the data it was trained on. Portraits created by GANs, such as those produced with NVIDIA's StyleGAN, are so realistic that a human eye can be fooled into taking them for real snapshots of a person. GANs have seen widespread use in the creation of lifelike celebrity faces.
The advent and rise of AI, however, has generated quite a bit of negative speculation as well, owing to recent developments in the field. Some key concerns are as follows:
- In 2016, Hong Kong-based Hanson Robotics introduced Sophia to the world. Sophia is a humanoid robot adept in social skills: it can strike up a conversation, answer questions, and display more than 60 facial expressions. As futuristic as it looked, the eeriness of the whole scenario did cause discomfort among the masses; after all, machines acting human is something people are not accustomed to. The increasing use of robots and robotic science in the manufacturing industry is striking a rather uncomfortable nerve worldwide, as it comes with the replacement of the human workforce.
- It has been noticed that only a handful of industries, mostly the IT sector and specific manufacturing industries, gain immense help from AI. As a result, not every party is willing to invest in AI technology, and it remains to be seen how the situation unfolds.
- The last two decades witnessed a blossoming of interest and investment in AI. The emergence of AI algorithms, coupled with massive amounts of data and the ability to manipulate them, is one of the most significant factors behind where artificial intelligence stands today; the development of deep learning is another reason for the resurgence out of the AI winter. However, with all the investment, interest, and funding, can AI live up to its hype, or is it heading towards another AI winter through over-exaggeration, overpromising, and seeming under-delivery of its stated capabilities? It remains to be seen.
While there is certainly plenty of speculation about AI, we expect that the next AI winter will not come, though another is possible if past circumstances are repeated. For now, AI is becoming part of our daily lives. It is in our cars, phones, and other technologies we use day to day, and it is common to interact with AI regularly, whether through a helpful chatbot, a personalized ad, or better movie and TV suggestions. AI is deeply integrated into our lives, and only time will tell where it heads.
In any software development process, the methodology involved is more or less the same. The most generic requirements are developers, a preferred programming language, testers, and a carefully planned set of actions to perform. The same applies to the development of CAD software as well.
Having CAD software that can actually meet product development needs is an obvious necessity. Although there is a lot of common ground between a CAD software development project and a regular software development project, there are criteria very specific to CAD software development that need to be addressed.
Let us take a walkthrough:
Acceptance Criteria
Acceptance criteria list user requirements and product scenarios one by one. They spell out the conditions under which the user requirements are met, removing uncertainty about the client's expectations and avoiding misunderstandings. Defining acceptance criteria is not simple, however, and it is not realistic to expect a 100% pass rate. A practical approach is to maintain a set of test cases with defined input and output data.
Algorithmic Complexities
To successfully develop a complex product, two critical questions must be answered: how to develop the right product, and how to develop the product right. Unlike problems such as interest-rate calculations or workflow management, there is no single defined set of steps that leads to the final answer. There are often multiple algorithms for a single problem, and the situation becomes more complicated when an algorithm deemed perfect for one situation performs poorly in another, which often leads to trade-offs.
Tolerances
Tolerance is a significant factor in evaluating product quality and cost. Tight tolerances help meet design requirements in terms of function and quality, but they also impose additional demands in manufacturing, inspection, and service, resulting in higher production cost. Most CAD data is stored in double- and single-precision floating-point variables, so floating-point precision and tolerance play a very important role in the algorithms. When importing data from other systems, say a STEP file from another source, a mismatch in tolerance can cause a lot of issues in the destination system.
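Because exact equality rarely holds for floating-point geometry, comparisons are usually made against a tolerance. A minimal sketch, in which the tolerance value and function name are illustrative choices rather than any system's standard:

```python
import math

LINEAR_TOL = 1e-6  # illustrative modelling tolerance, in model units

def points_coincident(p, q, tol=LINEAR_TOL):
    """Treat two 3D points as identical if they lie within tolerance."""
    return math.dist(p, q) <= tol

# 0.1 + 0.2 is not exactly 0.3 in binary floating point...
a = (0.1 + 0.2, 0.0, 0.0)
b = (0.3, 0.0, 0.0)
print(a[0] == b[0])              # exact comparison fails
print(points_coincident(a, b))   # tolerance-based comparison succeeds
```

A tolerance mismatch between systems means one system may consider two entities coincident while the other does not, which is precisely how imported STEP geometry ends up with gaps or duplicate vertices.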
Risk of Regression
Adding new functionality or improving an algorithm always carries the risk of breaking test cases that worked before the change. One should develop a robust test suite for catching such regressions during testing. Creating a regression test suite requires thorough application knowledge and a complete understanding of the product flow.
Interoperability
The rapid emergence of varied CAD software has democratized design, leading to the use of multiple CAD systems in the design process and aggressively challenging CAD interoperability. Different suppliers require different CAD platforms, depending on many factors, primarily the nature of the task and the product being worked on. Merging different CAD data without affecting the design intent is quite the hassle. Although a lot of software these days supports different CAD file formats, there are instances where the particulars of a project have confined the product to one CAD system. Interoperability saves extra work, and whether to make your own software compatible with others is a decision that should be weighed seriously.
5 Factors to consider while choosing a CAD platform
Choosing a CAD platform can be a very difficult decision for any organization. Depending on the organization's size, it can be a crucial one, because it is a "sticky" decision that cannot easily be changed later. The decision should therefore weigh a variety of factors, some of which this document discusses in detail.
In a recent survey conducted by a prominent website, 230 product development professionals were asked about their level of satisfaction with CAD software. The survey was intended to gather the individual experiences of CAD users, turn them into informative insights, and draw out the common issues faced by design teams. The common issues were grouped under specific causes, and in total four major issues emerged.
1. Suitability of CAD software
The most important aspect perhaps is the suitability of the CAD system for a particular organization. It is always a good idea to list out all the workflows, representative parts, any special processes, etc. Then a benchmark study should be conducted to assess the suitability of different CAD software against the checklist. One can even rate different software on each of the parameters.
2. Software Ecosystem
This is an external factor but an important one. If you need to work with a lot of vendors, that aspect has to be considered: will your vendors be able to provide data in your format? On the other hand, your customers may force you to provide data in a specific CAD format.
The availability of trained resources is also an important consideration to ensure that you can attract and retain talent for your business needs.
3. The learning curve for CAD software
When a new CAD package is introduced, the amount of time taken for the users to learn the new features is critical to how precisely and quickly design teams can bring their ideas to life.
Although companies provide specific, coherent, and comprehensive training for new CAD software, that alone is not enough; users also have to familiarize themselves with the interface of the new package. Needless to say, the time consumed in this process causes quite a bit of inconvenience.
4. Interoperability of CAD software
Importing and exporting files correctly shouldn't be a hassle in general. Yet this issue ranked second, as CAD users found importing/exporting, or interoperability, quite the headache. The primary problem CAD users face when importing and exporting files is that the 3D model loses its features; without parameters it has no intelligence, and it is no longer parametric. Sometimes the object is incomplete or only partially translated, meaning surfaces are missing.
A lot of CAD packages do not provide interoperability, the ability to read other native CAD formats. This forces the user to convert data into a STEP file or some other neutral format, leaving design engineers with inconsistent data from various sources, which can result in complicated product definitions.
5. Cost of Ownership
The cost of ownership is a big deal among users. Whether it is the actual cost that concerns them or the fact that they do not perceive sufficient value, the cost of ownership has always been a sore point. A possible cause might be that users are unaware of important new functionality available in modern CAD systems that can massively enhance product development processes. Some of these features enable better ways of creating and managing documentation, as well as useful tools such as generative design and simulation.
To better understand how leveraging new functionality can offset the notion that CAD software costs too much, consider one of the new features, simulation, which complements model design. Companies that identify design issues early in the design cycle are actively using simulation in that phase, integrating it into their design process. Simulation aids in iterating the design and making varied choices much earlier, rather than leaving those choices to the prototype phase.
The bottom line is that product development professionals want an affordable CAD system that provides value to their designs. Interoperability remains a major hindrance, one that seems quite unnecessary and outdated in this era. Design professionals want their CAD system to be familiar in interaction and easy to use, and they want that usability to translate into an easier search and hiring process.
At the starting phase of developing an application, the primary question is usually this: should it be an add-on application or a standalone application?
Before we get into the details, we need to understand what exactly these two terms mean in the computing world.
An add-on (also known as addon or plug-in) is a software application, which is added to an existing computer program to introduce specific features.
As per Microsoft’s style guide, the term add-on is supposed to represent the hardware features while add-ins should be used only for software utilities, although these guidelines are not really followed as terms are mixed up quite often. When a program supports add-ons, it usually means it supports customization. Web browsers have always supported the installation of different add-ons to suit the tastes and topics of different users by customizing the look and feel of that particular browser.
There are many reasons for introducing add-ons in computer applications. The primary reasons are:
- To extend an application by enabling third-party developers to create a variety of add-ons
- To support new features
- To reduce the size of an application
- To separate source code from an application because of incompatible software licenses
Host applications usually operate independently, which makes it possible for developers to add and update add-on features without affecting the host application itself. A host application does not depend on its add-ons; on the contrary, an add-on depends on the host application. Add-ons rely on the services provided by the host and cannot operate by themselves.
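The host/add-on relationship can be sketched as a simple plug-in registry; the class and method names here are illustrative, not any real framework's API.

```python
# Minimal host application with a plug-in (add-on) registry.
class Host:
    def __init__(self):
        self.plugins = {}

    def register(self, name, plugin):
        """Add-ons attach themselves to the host; the host never depends on them."""
        self.plugins[name] = plugin

    def run(self, text):
        # The host works the same with or without plug-ins installed.
        for plugin in self.plugins.values():
            text = plugin(text)
        return text

host = Host()
print(host.run("hello"))           # host works on its own

host.register("shout", str.upper)  # an add-on extends behaviour
print(host.run("hello"))
```

Note the one-way dependency: the host defines the extension point and keeps running if no plug-in is registered, while each plug-in is useless without the host to invoke it.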
A standalone application is software that does not come bundled with other independent software. In simple words, a standalone application does not require any separate software to operate.
A stand-alone application deploys services locally, uses the services, and terminates the services when they are no longer needed. If an application does not need to interact with any other applications, then it can be a stand-alone application with its own exclusive local service deployment. Services locally deployed by this application are not available to any other application.
A standalone application needs to be installed on every system which makes it hard to maintain. In the event of a system crash or a virus attack, when a system needs to be replaced or reinstalled, the application also needs to be reinstalled. The access to the application is limited only to the systems that have the application installed.
Standalone applications are typically not kept online, and remote availability of data is practically impossible. However, there are situations where a standalone application is the best choice. Here are a few:
- Text-mode printing on pre-printed stationery, which browsers fail to do
- Where data security is very high and you don’t want the data to travel on the wire at all
- Design applications which need very high responsiveness and at the same time work on big data structures
- Printing on legacy continuous stationery
- No need of networking, application is needed only on a single system
- More hardware support like barcode printers, webcam, biometric devices, LED Panels, etc.
- More Operating System level operations like direct backup to external devices, mouse control, etc.
- Creation and manipulation of local files
As software development progresses, there comes a stage where the software must be evaluated before being declared the final output. This phase is known as testing. Testing detects and pinpoints bugs and errors in the software, which leads to rectification. There are instances where the rectifications introduce new errors, sending the software back for another round of testing and creating a repeating loop. This repeated testing of an already tested application to detect errors resulting from changes has a name: regression testing.
Regression testing is the selective retesting of an application to ensure that modifications have not caused unintended effects in previously working functionality.
In simple words, to ensure all the old functionalities are still running correctly with new changes.
This is a very common step in any software development process by testers. Regression testing is required in the following scenarios:
- If the code is modified owing to changes in requirements
- If a new functionality is added
- While rectifying errors
- While fixing performance related issues
Although every software application requires regression testing, specific considerations apply to different applications based on their functioning and utility. Computer-aided design (CAD) applications in particular require specific points to be kept in mind before undergoing regression testing.
Regression testing can be broadly classified into two categories: UI testing and functionality testing. UI (user interface) testing exercises an application's graphical interface, and numerous tools are available for carrying it out. Functional testing, however, presents a more involved situation. This article focuses on the points to take care of while carrying out functional regression testing.
Here are the most effective points to consider for functional regression testing:
- It is important to know what exactly needs to be tested and the plans or procedures for the testing. Collect the information and test the critical things first.
- It is important to be aware of market demands for product development. A document or traceability matrix should be prepared to link the product to the requirements and to the test cases, and it should be updated as the requirements change.
- Include the test cases for functionalities which have undergone more and recent changes.
It is difficult to keep writing and modifying test cases as the application is updated, since frequent changes to the code can introduce internal defects that break already existing functionality.
- It is preferred to run the functionality testing in the background mode (non-UI mode) because often it is faster and eliminates problems associated with display settings on different machines.
- One needs to lay down precise definitions of the output parameters of interest: the number of faces, surface area, volume, weight, centre of gravity, surface normal, curvature at a particular point, and so on. It is always a good idea to have a quantifiable output parameter that can be compared.
- It is often advisable to develop a utility that writes the parameters of interest to an output file; it could be a text, CSV, or XML file.
- Creating baseline versions of the output data files is a good idea, so that every part for which baseline data exists can be inspected.
- Developing an automation script enables the entire test suite to run without any manual intervention, so the results can be compared automatically.
- For every run of a test case, compare the generated output data with the baseline version. Keep in mind that if the output data contains doubles or floats, tolerance plays a very important role in the comparison.
- Some areas in the application are highly prone to errors; so much that they usually fail with even a minute change in code. It is advisable to keep a track of failing test cases and cover them in regression test suite.
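The steps above can be sketched as a small baseline-comparison utility; the parameter names, CSV layout, and tolerance are illustrative assumptions, not a standard format.

```python
import csv
import io
import math

TOL = 1e-6  # illustrative tolerance for floating-point output parameters

def compare_to_baseline(baseline_csv: str, current_csv: str, tol=TOL):
    """Return the names of output parameters that drift beyond tolerance."""
    baseline = {r["parameter"]: float(r["value"])
                for r in csv.DictReader(io.StringIO(baseline_csv))}
    current = {r["parameter"]: float(r["value"])
               for r in csv.DictReader(io.StringIO(current_csv))}
    # A missing parameter compares against NaN and is always reported.
    return [name for name, expected in baseline.items()
            if not math.isclose(current.get(name, float("nan")),
                                expected, rel_tol=tol, abs_tol=tol)]

baseline = "parameter,value\nfaces,6\nvolume,1000.0\narea,600.0\n"
current  = "parameter,value\nfaces,6\nvolume,1000.0000004\narea,598.2\n"
print(compare_to_baseline(baseline, current))  # only 'area' drifts beyond tolerance
```

Note that `volume` differs in the last digits yet passes, while `area` is flagged: the tolerance-based comparison is what keeps harmless floating-point noise from failing the whole regression suite.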
Failure to address performance issues can hamper the functionality and success of your application, with unwelcome consequences for end users if your application doesn’t perform to expectations.