An automated warehouse in Hong Kong that runs 24/7 uses a swarm of AI-driven robots to help deliver groceries. Known as Autonomous Mobile Robots (AMRs), they operate on a tailored track laden with QR codes that track their movements. The data they collect helps improve their efficiency over time. The more the robots work, the smarter they become.
AI has helped meet modern consumers’ demands for fast delivery, and the Covid-19 pandemic has expanded the market for automated logistics. Big players in e-commerce like Amazon and Alibaba already have hordes of AI-powered robots relentlessly doing their bidding. These robots and the computerized systems running them are applications of a much bigger field of study: Artificial Intelligence.
Automation is here to stay and thrive. There is no going back from a technology that is on a mission to transform how we interact with our daily tasks. Automation is everywhere: from warehouses to factories, from mobile phones to customer support, from cab services to transportation. You name a field, and Automation is already prevailing in it.
Tesla and SpaceX CEO Elon Musk claimed that AI will be smarter than humans and will overtake them by 2025. Although that sounds a bit exaggerated, given the rate at which AI and Automation are galloping towards the future, such predictions are not entirely dismissible. Notably, Musk has also described AI as an existential threat, and there have been growing concerns about AI taking over human jobs.
Is it a grave threat, or is it fear-mongering?
As per a leading consulting firm, one in three US employees could lose their jobs to Artificial Intelligence by 2030.
How Automation is affecting various industries
Automation is a product of the great industrial revolutions that changed the production and commodity landscape. There have been four industrial revolutions, the current one being known as Industry 4.0 (to read more, refer to Introduction to Industry 4.0).
Coming back to Automation and its effect on industries, it is safe to say that some sectors will feel the impact more than others. Let us have a quick look at the industries ready to embrace the automation juggernaut.
Manufacturing: Probably the sector most changed by Automation, manufacturing is a fast-evolving domain that demands rapid advances in Automation. Intelligent machines and robots have been in use in this industry for decades. Automation in manufacturing enables error-proof operation, consistent production, negligible downtime, reduced reliance on human factors, and a constant pace. In a world of growing consumer demand, one must be super-efficient to keep supplying products to the market continuously.
Transportation: Transportation is one of the first industries to be affected by the automation wave. Airplanes have already been using autopilots for decades. Self-driving cars are being increasingly tested and deployed on the road. Couple that with the Internet-of-Things (IoT), and we have a robust system of intelligent vehicles.
Agriculture: With the world population touching 8 billion by the end of this decade, there is a dire necessity of producing the optimum amount of food to feed the people. As a result, the agricultural sector needs increased attention regarding automating food production, distribution, and supply.
Logistics: As mentioned earlier in this blog, top companies like Amazon and Alibaba have upgraded logistics at the consumer level by employing robots, placing AI technologies to manage warehouses and delivery departments.
Healthcare and Pharmaceuticals: With the advent of nanotechnology, robotics, and IoT, the healthcare and pharma sector has climbed the ladder and introduced some groundbreaking medical treatments. Fields like gene research and gene editing employ nanobots to carry out tasks.
Customer Relations: Remember when you enter a website and a pop-up appears, eager to lend you support? Or when you have a complaint and interact with a customer care executive? They are most likely chatbots with curated responses to address your queries and grievances. Many retail outlets in advanced nations are also adopting cashier-less automated transaction desks.
Automation will put an end to repetitive work, and it has already started shaping future jobs. It is likely that many current jobs will soon no longer exist. It is even predicted that plumbers, car mechanics, barbers, and funeral directors are likely to be replaced by automated appliances, robots, and computers.
Will Automation Take Over Jobs, Or Will It Improve Them?
As seen from the thriving tech sector, there is no immediate threat to jobs from AI, but a more radical use of technology could destroy employment opportunities for millions. Automation has been around since the late 1800s, but with the rise of the digital revolution, it has gained momentum and spread across a wide range of sectors and services. We are already witnessing Automation and robots taking over repetitive and mundane processes like manufacturing and relaying information to factory floors. In the transport sector, large parts of the workforce are being replaced by technology. The financial industry has also begun losing jobs to computers, which can now perform many of the routine tasks involved.
Eventually, it may result in a world full of unemployed people and loads of robots and intelligent systems. Yes, all those possibilities could turn out to be true. There are big movie franchises that show why this is not a good idea.
A 2013 study on the probability of jobs being automated predicted that bank workers, transportation and logistics workers, and clerical and administrative workers - many of them middle-class jobs - were at risk of being replaced by technology.
But is that fear justified?
While Automation will indeed displace many jobs over the next 10 to 15 years, it won’t eliminate human employees altogether; rather, it will modify the job landscape by introducing new work opportunities. By eliminating the drudgery of repetitive tasks, Automation will place people in control of an entirely different set of operations. As a result, future jobs will demand different skills and educational requirements. Far from eliminating work, most experts believe that Automation will create an enormous number of jobs.
The World Economic Forum estimates that automation will lead to a net increase of 5.8 million jobs.
An investment management firm predicts that Automation will boost US GDP by five percent, or $1.2 trillion, over the next five years. Of the jobs that will be transformed by Automation, two-thirds will become more skilled, while the other third will become less skilled.
Fears that machines will put large parts of the workforce out of work are exaggerated. Today’s European workers are facing considerable change as their jobs evolve with technology. Market analysts suggest that more than 80 million European employees - about 50% of the total workforce - will have to learn significant new skills and upgrade themselves for their current jobs over the next decade. In megacities such as London and Paris, employment opportunities are concentrated where few residents are qualified to fill them. In such situations, labor-saving tools can actually lead to more work for people, sometimes by freeing up capacity, sometimes because new technology creates new requirements.
Studies claim that most jobs will be modified rather than disappear entirely. Many jobs will continue to exist, with a healthy share of their tasks automated. While manual and routine tasks are in decline, other skills are generally considered safe: cognitive skills such as critical thinking, and socio-behavioral skills such as recognizing and managing emotions and working in teams. These are skills robots cannot easily replicate today. They can also become a competitive advantage: in countries where companies automate selectively, such skills help protect and even increase jobs.
The World Bank's 2019 World Development Report likewise dismisses speculation that automation will displace jobs wholesale.
Data entry and office jobs are likely to decline, as computers can instantly load files and sort information. On the other hand, the work of occupational therapists, who treat, support, and evaluate people in the workplace, requires skills that robots are unlikely to replicate.
To be productive in the future, many experts suggest that humans and robots must work side by side. Robots will take on the jobs that can be automated, while humans will handle the jobs that require a personal and creative touch.
Companies can create a working synergy between employees and Automation. A good example is a renowned robotics company that builds cobots - collaborative robots designed to make Automation easier for human employees to work with. The company has developed an online course that allows workers with no technical background to program a robot in 87 minutes. As a result, human workers can set up automated robots for specific tasks, which in turn generate large amounts of valuable data.
The book “The Sentient Machine” is more optimistic about the impact of AI on society than books like “The Rise of the Robots,” which is more of a cautionary tale, raising concerns about robot automation and AI taking over not only blue-collar jobs but white-collar ones as well. Either way, those with a high probability of losing their jobs to artificial intelligence should not panic.
The Future of Automation
Automation is approaching a tipping point, and its impact on jobs will be determined by which countries adapt most rapidly and effectively. Employers can expect to rely on computers for jobs that people would typically do. Computers make fewer errors and, in some areas, are more competent than human workers, so intelligent computers and their robot associates will be the future of work.
One thing is for sure: countries highly dependent on industries like agriculture, textiles, food, and cars are expected to be the worst hit. However, a study by the World Economic Forum predicts that even developed economies will see jobs lost to technology in the next five years. What’s surprising is that the impact on employment is also likely to be far less intense in countries where we’d least expect it. Just before a global conference on the future of work, CNBC explored how technology is transforming the world of work and its effect on both blue- and white-collar workers.
The long-term impact of AI and other automation technologies on the labor market is uncertain as of now. It is recognized that many jobs will be affected, but it is difficult to predict precisely which positions in which sectors are at risk. No one knows how things will unfold in the future, so the best bet is to study the Automation & AI market, upgrade accordingly, and stay prepared.
The emergence of Artificial Intelligence in recent years has shifted the dynamic of technology’s interaction and implementation in a way seldom seen. The fact that machines can think, analyze, and operate like humans has been raising eyebrows since the field’s inception. Artificial Intelligence and Machine Learning are deemed to be the most sought-after fields and career options for the coming decades.
But what is so captivating about AI ML? What is it that sparked fears of AI ML taking over human labor?
Inarguably, it is the essential attributes of AI: intelligence and learning.
Humans are the only species in the world whose intelligence quotient surpasses that of any other species by a wide margin. Whether due to evolution or otherworldly miracles, it is safe to say that humans stand at the top of the food chain and dominate the ecosystem like no other. This dominance can be credited to human intelligence.
Although there is no definitive description of intelligence, one of the greatest scientists of our time, Stephen Hawking, famously said, “Intelligence is the ability to adapt to change.”
Intelligence can be thought of as an ability to acquire and apply knowledge or skills. The broad spectrum of intelligence covers abilities like understanding, logic, self-awareness, emotional experience, reasoning, planning, critical thinking and problem-solving. Although some of these abilities are found in every other animal species out there, humans surpass them by a long shot.
So what is it that humans do differently and better than other species? Here are a few examples:
Humans are capable of gathering information about a phenomenon from multiple sources. Such a varied perspective makes it easy for humans to consolidate data from different standpoints and form a solid knowledge foundation.
Collecting and consolidating information lets humans correlate all the data and bind it together into a patchwork, giving shape to the knowledge base.
A fundamental characteristic is the ability to make decisions with limited data or a partial understanding of the system. A human mind can process a piece of information from several standpoints and draw out the best conclusion, something never witnessed in any other species.
Learning is an essential factor in evolution. Without the capability to learn, humans would not have made it this far. Most animal species have a distinct learning curve, which has helped them overcome adversity and evolve accordingly.
Learning is a process that causes “change” as a result of acquiring new or modifying existing knowledge, behaviors, skills, values, or preferences.
Learning is very much intertwined with intelligence; indeed, learning is an application of intelligence itself. Putting it simply, intelligence stirs the pot, while learning tastes it and works out what needs to be done. Here is how intelligence and learning interact with each other:
Learning encompasses the following methodologies:
Working with only data and no/partial knowledge about a system
The next step is building a model with the limited data available
The last step is drawing conclusions from the model, analyzing and identifying its shortcomings, and refining the model
Learning facilitates prediction: the model is used to predict outcomes, actual observations are gathered, and the two are compared to measure how far the predictions deviate.
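To make this loop concrete, here is a minimal Python sketch of the data, model, predict, compare, refine cycle just described. The single-parameter model and the numbers are invented purely for illustration.

```python
# A toy learn-predict-refine loop: fit y = w * x to noisy observations.
observations = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, observed output)

w = 0.0              # initial model: we know nothing about the system
learning_rate = 0.01

for step in range(1000):
    total_error = 0.0
    for x, y_observed in observations:
        y_predicted = w * x                  # predict from the current model
        error = y_predicted - y_observed     # compare prediction with observation
        w -= learning_rate * error * x       # refine the model to reduce the error
        total_error += error ** 2

print(f"learned w = {w:.3f}, squared error = {total_error:.4f}")
```

After enough iterations the parameter settles near 2, the slope hidden in the observations: the model improves precisely by comparing its predictions with what is actually observed.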
Approach to AI ML through Intelligence and Learning
The conventional method of problem-solving through intelligence and learning relied on a one-time, pre-defined, rigid model which, when run through the application, yields a definite result that cannot be processed any further. Such a method relies on strict one-way inputs that are mostly theoretical. This approach negates the learning process and leaves little scope for refinement. The conventional approach also takes a lot of time, and once completed, it is considered done and dusted.
The AI ML approach, however, takes a more flexible route: it extracts information from time to time, understands the essence of that information, and refines or adjusts the model according to the findings. Such a method draws on varied environmental inputs that change over time, allowing several conclusions to be drawn. The results are distilled to capture the intrinsic, indispensable quality of the system, which determines its character. The AI ML approach thus allows the model to be adjusted to that character and run through the application again.
Although AI ML has imitated human intelligence and learning capability to a reasonable extent, the game is far from over. Humans, being complex creatures, have a wide range of intelligences: logical-mathematical, existential, interpersonal, linguistic, bodily-kinesthetic, and so on. Humans can also derive meaning from cosmic questions far beyond the reach of an AI machine, at least for now. It remains to be seen how far AI ML will go, considering this is just the start.
Artificial Intelligence (AI), Machine Learning, and Deep Learning have been extensively used for more than a decade but largely remained confined to areas such as voice recognition, image reconstruction, image/signal processing, and output prediction.
Such algorithms have seen limited usage in engineering domains such as thermal management, electronics cooling, fluid dynamics prediction inside an engine or over a bonnet, and aerodynamics problems across an aerofoil or turbine engine.
The delicate relation between AI ML and engineering can be better explained with two specific terms – a priori knowledge and a posteriori knowledge. Since the time of Immanuel Kant, Western philosophy has defined a priori knowledge as knowledge attained from reason, independent of particular experiences. On the contrary, a posteriori knowledge is derived from empirical evidence that has to be considered authentic.
This means a priori knowledge is not circumstance-centric but instead follows a set of fairly universal rules. Fundamental concepts of thermodynamics, electromagnetism, mechanics, and material properties are highly quantitative. They stick to a predetermined route rather than drawing on a vast stock of different scenarios.
Engineering Problems Requirements
Every problem related to engineering emphasizes the below-mentioned parameters:
High Accuracy Levels – Every endeavor starts with a model in the early stages. The model undergoes various physical tests and virtual simulations to gather all sorts of data, determine its workability, and identify areas to improve. The model goes through several stages of scrutiny until high accuracy levels are achieved. AI ML, by contrast, depends heavily on the inputs it is fed, and its outputs can fluctuate from run to run.
Function Over Feel – Engineering problems demand the accurately intended functionality of a model; the feel of the component is never the priority. Every process applied at every stage ensures the intended functioning is obtained. As mentioned before, it is more linear. AI ML, on the other hand, targets the feel, which varies between situations.
High Repeatability and Predictability – An engineering task involves highly repetitive activities whose desired outcomes are already known. An AI ML output cannot simply be predicted, and as a result, AI ML is not suitable for a conventional engineering model.
However, recent years have witnessed increased usage of AI ML in the engineering sector, which is attributed to the following change in trends:
To keep up with rapid advancements and address consumer needs, the speed of coming up with new ideas, designs, and versions has increased
Extensive field testing is not viable anymore
Over the years, large digital footprints of earlier designs and products have accumulated and are available to serve as training feed for AI ML
The feel attribute is gaining importance, which encourages the implementation of AI ML
The trend of customized design to suit specific requirements is getting more and more common
Application of AI ML in Engineering Problems
Artificial Intelligence has found its niche in the engineering sector, and it is extensively used in four areas of operation that carry massive importance in today’s market.
Since related data are available for every product released in the market, we have a vast, readily available database from which to quickly retrieve past information and generate engineering data. This streamlines the task: we can understand product requirements, spot the recurrence of conditions seen in the past, and pull out the past data that previously catered to them.
This minimizes the time required to draw up an elaborate plan from scratch. If a problem is repetitive, it can be solved with the help of past data. This makes it possible to attend to multiple issues simultaneously.
Failure Analysis is the collection of data and analysis to obtain the cause of a failure.
Failure analysis is essential as it helps pinpoint causes and the reasons behind them, and paves the way to determining corrective actions or liabilities. A massive set of failure analysis records is fed to AI ML, which comes in handy when similar failures occur. AI ML can assess the damage and return valuable information if a comparable incident occurred in the past. Once again, this cuts down on detailed investigation and time.
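As one hedged illustration of feeding past failure records to a learning system, the sketch below uses scikit-learn's TF-IDF vectorizer and a nearest-neighbor index to retrieve the most similar historical failure for a new report. The records and causes are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical historical failure records: (description, known root cause).
records = [
    ("bearing overheated under sustained load", "insufficient lubrication"),
    ("hairline crack near weld joint after vibration test", "weld fatigue"),
    ("seal leakage at low temperature", "material shrinkage"),
]

# Build a searchable index over the failure descriptions.
descriptions = [desc for desc, _ in records]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(descriptions)
index = NearestNeighbors(n_neighbors=1).fit(matrix)

# A new failure report: retrieve the most similar past record and its cause.
new_failure = ["crack found close to weld after vibration"]
_, idx = index.kneighbors(vectorizer.transform(new_failure))
desc, cause = records[idx[0][0]]
print(f"closest past failure: '{desc}' -> likely cause: {cause}")
```

Real failure-analysis systems are far richer, but the principle is the same: past records become a queryable asset that shortcuts fresh investigation.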
While the digital twin concept has been around since circa 2002, credit goes to the Internet of Things (IoT) for making it cost-effective to implement. It was named one of the top 10 technology trends for 2017, given how imperative it is to business.
The digital twin is a virtual, digital replica of a real-world entity or process.
Intelligent components are integrated into a physical element to gather data such as working conditions, position, and process changes. The compiled data is collected, synthesized, and fed into a virtual model and AI algorithms. Such data assets can be created even before the physical model is built. Applying analytics to these virtual models can yield relevant insights about the real-world asset. The best part of the digital twin is that once the physical and virtual models are integrated, the virtual model stays in sync with the actual one.
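A rough sketch of that sync loop in Python (the pump, its fields, and the threshold are all hypothetical): sensor readings from the physical asset update the virtual model, and a simple analytic runs against the updated state.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin of a pump, mirroring its physical state."""
    temperature_c: float = 20.0
    rpm: float = 0.0
    history: list = field(default_factory=list)

    def sync(self, reading: dict) -> None:
        # Integrate the latest sensor data into the virtual model.
        self.temperature_c = reading["temperature_c"]
        self.rpm = reading["rpm"]
        self.history.append(reading)

    def analyze(self) -> str:
        # A simple analytic: flag overheating before it damages the asset.
        if self.temperature_c > 80.0:
            return "ALERT: overheating, schedule maintenance"
        return "OK"

# Simulated stream of readings from the physical pump's sensors.
twin = PumpTwin()
for reading in [{"temperature_c": 65.0, "rpm": 1400},
                {"temperature_c": 85.5, "rpm": 1450}]:
    twin.sync(reading)
    print(twin.analyze())
```

A production twin would mirror far more state and run richer analytics, but the pattern of ingest, update, and analyze is the core of the idea.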
Digital inspection involves collecting information on, and analyzing, products in production to ensure quality control.
Digital inspection has gained considerable momentum in engineering, especially in the manufacturing sector. Unlike paper inspections, which could be laced with occasional errors, digital inspection minimizes or completely eliminates the chance of mistakes. AI ML has made its way into production and manufacturing, providing automation that is faster, more cost-effective, and superior to human involvement. AI-infused digital inspections build intelligent systems that perform quality checks down to the finest details, leaving no stone unturned.
The rise of artificial intelligence has allowed automated machines to develop complicated manufacturing and design operations. AI has found significant importance in:
Areas where data is available (or could be generated) and forward model is hazy or too complex
Areas concerning partial data and incrementally growing data problems
Areas of operation with disparate sources of information and varied data
Forward engineering problems where constraints and inputs are not well defined/quantified
The end goal is to introduce machines capable of learning, exploring, probing, and improving without human intervention. AI ML and Big Data are climbing the ladder of engineering at pace. Interestingly, in our pursuit of creating supreme AIs, we are also uncovering how human brains perceive and operate, and how we address the learning process, both consciously and unconsciously.
In November 2014, E-commerce giant Amazon announced the launch of Alexa, a voice-controlled virtual assistant whose task is to transform words into action. It caught the attention of tech enthusiasts and the general populace alike. The inclusion of Samuel L. Jackson’s voice in Alexa was the talk of the tech town.
Recent years have witnessed a dramatic change in the way technology interacts with humans, and Alexa is just one card out of the deck. Innovations from Tesla’s Cybertruck to Facebook’s EdgeRank and Google’s PageRank have drawn both awe and a little commotion within the tech community. The driving force behind such innovations can be put under a single umbrella term: Artificial Intelligence, or AI.
Artificial intelligence (AI) can be defined as the simulation of human intelligence in machines, especially computer systems and robots. The machines are programmed to think like humans and mimic human actions such as learning, identifying, and problem-solving.
Although AI seems to have burst onto the scene only recently, its history begins well before the term was first coined. The principle derives from automata theory and found references in many storybooks and novels. Early ideas about thinking machines emerged in the late 1940s and ’50s from the likes of Alan Turing and John von Neumann. Turing famously created the imitation game, now called the Turing Test.
After initial enthusiasm and funding for machine intelligence until the early 1960s, the field entered a decade of silence: a period of reduced interest in, and funding for, AI research and development. This period of decline is known as the ‘AI Winter.’ Commercial ventures and financial assistance dried up, and AI was put into hibernation for the duration.
The late 1970s witnessed a renewed interest in AI. American machine learning pioneer Paul Werbos devised the process of training artificial neural networks through backpropagation of errors. In simple terms, backpropagation is a learning algorithm for training multi-layer perceptrons, also known as artificial neural networks.
Neural networks consist of a set of algorithms that loosely mimic the human brain: much like a brain, they are designed to interpret sensory data, cluster raw inputs, and classify them accordingly.
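To see what backpropagation does in practice, here is a minimal NumPy sketch of my own (a toy two-layer network learning XOR, chosen purely for illustration): the output error is propagated backwards through the layers, and each weight matrix is nudged against its gradient.

```python
import numpy as np

# Toy dataset: XOR, a classic problem a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the network.
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # predictions approach [0, 1, 1, 0]
```

The same propagate-the-error-backwards idea, scaled to millions of weights, is what trains the deep networks discussed below.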
1986 saw backpropagation gain widespread recognition through the efforts of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. In 1993, Wan became the first person to win an international pattern recognition contest with the help of backpropagation.
Since the emergence of computers and artificial intelligence, computer scientists have drawn parallels between these intelligent machines and human minds. The comparison reached a pinnacle in 1997, when IBM’s chess computer Deep Blue played a match against renowned chess grandmaster Garry Kasparov. The match went on for several days and received massive media coverage. In the six-game match, Kasparov secured one win, Deep Blue secured two, and the remaining three games were draws. The highlight of the spectacle, however, was a machine’s ability to push the boundaries and set a new benchmark for computers.
Deep Blue made an impact on computing in many different industries. It enabled computer scientists to explore and develop ways to design a computer to tackle complex human problems with the help of deep knowledge to analyze a higher number of possible outcomes.
The rise in popularity of social media with Facebook saw the implementation of AI/ML in a wide array of applications. One prominent example was DeepFace. As the name suggests, DeepFace is a deep learning facial recognition system designed to identify human faces in digital images. DeepFace was trained on four million images uploaded by Facebook users and is said to reach an accuracy of 97%. Not long after came Generative Adversarial Networks (GANs), a class of machine learning introduced by Ian Goodfellow and colleagues in 2014, in which a model learns to generate new data with the same statistics as its training set. The portraits created by GANs, such as those from NVIDIA’s StyleGAN, are so realistic that the human eye can be fooled into taking them for real snapshots of a person, and GANs have seen widespread usage in the creation of synthetic celebrity faces.
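To show the adversarial setup in miniature, here is a hedged PyTorch sketch of my own construction (not Facebook's or NVIDIA's code): a generator learns to produce samples matching a simple one-dimensional target distribution while a discriminator tries to tell them apart from real samples.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(4, 1.5).
real_sampler = lambda n: torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(3000):
    # Train the discriminator to tell real samples from fakes.
    real = real_sampler(64)
    fake = G(torch.randn(64, 8)).detach()        # freeze G for this step
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # pretend fakes are real
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")  # ~4, ~1.5
```

Face-generating GANs replace the one-dimensional numbers with images and the tiny networks with deep convolutional ones, but the two-player training loop is the same.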
The advent and rise of AI, however, has generated quite a bit of negative speculation as well, owing to recent developments in the field. Some key concerns are as follows:
In 2016, Hong Kong-based Hanson Robotics introduced Sophia to the world. Sophia is a humanoid robot adept in social skills that can strike up a conversation, answer questions, and display more than 60 facial expressions. As futuristic as it looked, the eeriness of the entire scenario did cause discomfort among the masses. After all, machines behaving like humans is something people are not accustomed to. The increasing use of robots and robotics in the manufacturing industry is striking an uncomfortable nerve worldwide, as it comes with the replacement of the human workforce.
It has been noticed that only a handful of industries gain immense help from AI, mostly the IT sector and specific manufacturing industries. As a result, not every party is willing to invest in AI technology, and it remains to be seen how the situation unfolds.
The last two decades witnessed a blossoming of interest and investment in AI. The emergence of AI algorithms, coupled with massive amounts of data and the ability to manipulate that data, is one of the most significant reasons artificial intelligence has reached where it is today. The development of deep learning is another reason for the resurgence out of the AI winter. However, with all the investment, interest, and funding, can AI live up to its hype, or is it heading towards another AI winter due to over-exaggeration, overpromising, and the seeming under-delivery of its stated capabilities? It remains to be seen.
While there is certainly plenty of speculation around AI, we do not expect another AI winter, though one remains possible if past circumstances are repeated. For now, AI is becoming part of our daily lives: it is in our cars, phones, and other technologies we use day to day. Interacting with AI regularly is now common, whether through a helpful chatbot, a personalized ad, or better movie and TV suggestions. AI is deeply integrated into our lives, and only time will tell where it heads.
Outsourcing has remained an integral aspect of striking deals between engineering and design firms. While it has been growing at a solid pace each year, several companies have taken the route to insource a part of their formerly outsourced services portfolio.
Insourcing is the practice of assigning a task to an individual or group inside a company. The work that would have been contracted out is performed in-house.
Insourcing is the exact opposite of outsourcing, where work is contracted outside. Insourcing covers any work assigned to an individual, team, department, or other group within an organization: a task or function that a firm could also outsource to a vendor but instead directs in-house. It often involves bringing in specialists with relevant expertise to fill temporary needs, or training existing professionals to execute tasks without the need to outsource them. These professionals could either be direct employees of the organization or expertise hired from third-party vendors.
A perfect example: a company based in India opens a plant in the United States and employs American workers to work on Indian products. From the Indian perspective this is outsourcing, but from the American perspective it is insourcing.
Causes of Insourcing
The leading reasons for insourcing include:
A management mandate to make changes in corporate sourcing strategy
To provide a remedy for a turbulent outsourcing relationship
To obtain the right mix of in-house and outsourced services based on current business goals
Mergers and acquisitions can also influence insourcing decisions. A decent post-acquisition integration plan should include a common sourcing strategy between the two companies, which may ask for the outsourcing of functions that are in-house at one company and the insourcing of a task that was previously outsourced at the other
Insourcing enables companies to have control over decision-making and the ability to move more quickly and precisely
Reasons to Insource
Boosting business agility
Transformation needs secure integration with the business
Knowledge is now available and increasingly democratized
Providing a platform to nurture talent
While an insourcing project is achievable, it is essential to know that insourcing a service can be more complicated than outsourcing it. The transition may require rebuilding, from the ground up, services and capabilities that were once wholly owned by the service provider, which can turn out to be more complicated than expected.
Both insourcing and outsourcing are feasible ways of bringing in labor or specialty skills for a business without hiring permanent employees. When it comes to selecting between outsourcing and insourcing, several entrepreneurs cannot decide what is best for them. Before jumping on to the differences between these two business practices, we need to check the definition of the terms.
Insourcing is the practice of assigning a task or function to an individual or group inside a company. The work that would have been contracted out is performed in-house.
Outsourcing is the act of assigning a task or function to a third party vendor instead of having it performed in-house.
Differences between Insourcing and Outsourcing
Insourcing makes it possible to track the development process and control the quality of the work, while in outsourcing it becomes difficult to trace the quality of the work.
There is very minimal risk in insourcing, as one has complete supervision over intellectual property (IP). In outsourcing, the entire task is in the hands of an outside third party. If IP is leaked, the results are awful: investments in research, people, and development go in vain, and the outside party may claim the idea as its own.
Insourcing helps avoid intermediaries’ costs like fees and commissions. It also avoids other cost escalations, such as those that come from engaging third-party vendors who charge value-based pricing.
As insourcing keeps the task and the workforce in view, management works hand in hand with the development team, keeping an eye on every move in the business and finding and resolving problems. In outsourcing, by contrast, one cannot easily track when a problem arises or how it is fixed.
In outsourcing, there are possibilities of miscommunication, as the outsourcer and the vendor are in different places. Information goes from head management to the vendor’s managers, who finally convey it to employees; this arduous, lengthy chain carries a risk of miscommunication. Insourcing largely cancels out such possibilities, as there is direct communication with employees.
Outsourcing a project overseas might face issues due to time zone differences and cultural factors. A vendor might use different physical resources, techniques, design practices, and engineering approaches, and differing time zones raise the chance of communication problems. In insourcing, the assigned team can easily decipher the requirements, design, and engineering to produce a product suited to the local context.
Various projects require complete confidentiality of data and cannot be outsourced to a third-party vendor. In such cases, it is feasible to bring the vendor’s resources over to the project location, keeping confidentiality intact while introducing expertise.
Insourcing is preferable when the business requirement is temporary, for a limited time, or involves little investment. Outsourcing weighs more when businesses need to cut costs while still needing expert professionals.
Insourcing software development has turned out to be an effective way for tech firms to boost business (to learn more, refer to Insourcing – A Breakdown). It is the exact opposite of outsourcing with similar intents (to know more, refer to Insourcing vs. Outsourcing). But like any business strategy, preparation and execution are crucial for a successful endeavor. Choosing an insourcing partner requires as much meticulous planning and careful observation as outsourcing does. Following are tips on choosing the right insourcing partner for your business.
Establish insourcing goals
This is the most critical step a company can take while choosing an insourcing partner. The scope of work, the billings, and the project requirements have to fall under the insourcing partner's capability. The responsibility of the partner is to maintain a high standard of quality.
The Right team size
Many companies overlook this consideration while looking into insourcing options, but it's one of the most crucial factors in completing an in-house project. Make sure the vendor partner has the right blend of expertise and headcount to cater to your requirements.
Experience and Expertise
Find out if the vendor-supplied workforce has the right experience and expertise in delivering services similar to the one you plan to insource. This includes the number of projects executed, the types of clients worked for, and functional expertise for knowledge-intensive tasks. Assess the experience and qualifications of the vendor company's management team, project managers, and other team members. Before entering into a long-term or substantial contract, interacting with the proposed team members ensures fitment between the requirement and the team chosen to execute it.
Financial Stability
This factor is also overlooked to a great extent. It is essential to make sure that the vendor partner has sufficient working capital and is financially secure. There have been cases where the insourced workforce was not paid properly by their employers, which affected their productivity.
Privacy and Confidentiality
Numerous projects emphasize the confidentiality factor. There might be instances where a task cannot be outsourced merely because of its sophisticated nature and the business goals entangled with it, yet a lack of workforce and budget issues drive the company toward insourcing. Insourcing keeps the work in-house and private while bringing in the necessary resources.
A variety of other factors may apply, depending on client preferences and conditions. Irrespective of the vendor one chooses, it is always sensible to start with a pilot project and a small team, assess the long-run outcome, and scale up with time as the vendor's fitment with the business objectives and culture becomes clear.
When outsourcing projects or insourcing tasks, organizations face a crucial question about billing. Working with an outsourced development team means a few fundamentals need to be sorted out from the beginning, because each project is different and comes with its own set of requirements.
When a customer signs a deal with a software development company, they sign a billing agreement. The pricing model used depends mainly on project requirements. Two popular billing models are the Fixed Price contract and the Time and Material contract. Selecting the right contract agreement is a vital step when outsourcing software development; a wrong choice may yield unexpected outcomes.
Each type of contract has its pros and cons; hence, choosing between them can be complicated. The option well suited for one project may not be ideal for another. This article weighs the advantages and disadvantages of these pricing models and explains which is better under what conditions.
Fixed Price Contract
The fixed-price agreement is a type of contract where the service provider is accountable for completion of the project within the agreed sum in the contract.
In a Fixed Price model, the total project budget is set before development starts and remains unchanged. In addition, the exact deadline must be agreed before development starts, and the contractor bears the risk of late execution.
It is a practical choice in cases where requirements, specifications, and rates are highly predictable. The client should be able to lay out a clear vision of the project for the contractor to ensure an appropriate final result.
When to use a fixed price contract:
Clear requirements and deadlines
Limited or fixed budget
Limited project scope
Fixed Price advantages
A fixed price contract requires clear deadlines and budget figures to be set upfront. Planning expenses 1 to 3 months ahead provides accurate statistics.
Regular project management communication with the contractor ensures scope compliance and eliminates the possibility of surprises.
Payments to the service provider are based on the percentage of work performed. Little client involvement is needed in such workflows, since expectations are transparent and preset.
Time and Material Contract
A time and material (T&M) contract is a contract where the client is charged for the number of hours spent on a specific project, plus the cost of materials.
Time and material contracts differ markedly from fixed price because they bill clients for what they actually get: an hourly rate for all labor, plus the costs of materials. This type of arrangement presents some risk to the budget, but the flexibility to adjust requirements, shift direction, and replace features proves very beneficial nonetheless.
In this model, the customer plays a more significant role in the development of the software solution and bears all project-related risks. The responsibility the client carries through the development process is much greater with time and materials than with fixed-price projects.
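To make the difference concrete, here is a small Python sketch comparing what a client pays under each model; the rates, hours, and sums are invented for illustration.

```python
def fixed_price_bill(agreed_sum: float) -> float:
    """Fixed price: the client pays the agreed sum regardless of effort spent."""
    return agreed_sum

def time_and_material_bill(hours: float, hourly_rate: float,
                           material_costs: float) -> float:
    """T&M: the client pays for hours actually worked plus materials."""
    return hours * hourly_rate + material_costs

# A project estimated at 400 hours that actually takes 480 hours.
print(fixed_price_bill(40_000.0))                  # 40000.0: overrun risk on the contractor
print(time_and_material_bill(480, 90.0, 2_500.0))  # 45700.0: overrun risk on the client
```

The arithmetic is trivial, but it shows exactly where the risk sits: under fixed price the contractor absorbs the overrun, while under T&M the client pays for it.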
When to use T&M price contract:
Full project scope not established
Flexibility to modify the scope as requirements vary
Time and Material advantages
T&M contracts allow businesses to modify the scope of work, revise materials or designs, shift the focus or change features according to project requirements.
A general goal can be established upfront without needing to know beforehand exactly how it will be achieved.
Opting for a T&M contract saves time and lets projects start immediately.
Processing geometric inputs plays a crucial role in the product development cycle. Ever since the introduction of complex algorithm libraries, the new product development (NPD) landscape has changed drastically, and for good. Typically, a well-suited library streamlines the work process by executing complicated tasks through a wide array of functions.
An algorithm library basically works on the principle that it is fed specific instructions, which it executes using the functionalities built into it. For example, in the manufacturing industry, a point cloud library specializes in converting millions of point cloud data points into mesh models.
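As a rough illustration of the kind of task such a library performs, this sketch uses the open-source Open3D library (one option among many; the file names are hypothetical) to reconstruct a mesh from a point cloud:

```python
import open3d as o3d

# Load a scanned point cloud (hypothetical file) and estimate surface normals,
# which Poisson reconstruction needs in order to orient the surface.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()

# Reconstruct a triangle mesh from the points via Poisson surface reconstruction.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
print(f"{len(pcd.points)} points -> {len(mesh.triangles)} triangles")
```

A few library calls stand in for what would otherwise be a substantial amount of computational-geometry code, which is exactly the streamlining the article describes.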
There are particular algorithms for numerous perplexing tasks, and platforms that use specific, unique functionalities and programming to get the job done. Manufacturing requirements and end-product objectives lay down the criteria for choosing a particular algorithm library. This article sheds light on six key factors to consider when selecting an algorithm library.
Functionality
Once data has been fed and stored, methods for processing and compressing that data become highly interesting. Different algorithm libraries come with their own sets of functionalities. Ideally, functionalities are best developed by an in-house development team, to suit the design objectives. It is good practice to develop functionalities that address complex operations as well as simple tasks, and to develop functions that might be needed down the line. In the end, one’s objective defines which functionality-laden algorithm library will be used.
Data Size and Performance
Huge datasets can be challenging to handle and share between project partners, and larger data means longer processing time. All the investment in hardware and quality connections will be of little use with a poorly performing library. An algorithm library that can process multiple scans simultaneously should be the primary preference. One should also have a clear definition of the performance expected from the library, depending on whether the application is real-time or batch mode.
Libraries that automate manual processes often emphasize processing speed, delivering improvements to either processing or modeling. This allows faster innovation and often better, more focused products. As witnessed with point clouds, the ability to generate scan trees after a dataset has been processed greatly improves efficiency. A system with a smooth interface that permits fast execution greatly reduces the effort and time taken to handle large datasets, as the sketch below illustrates.
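A minimal sketch, using only Python's standard library, of the simultaneous scan processing mentioned above; the scan files and the processing step are placeholders.

```python
from concurrent.futures import ProcessPoolExecutor

def process_scan(path: str) -> str:
    # Placeholder for heavy, CPU-bound work: registration, meshing, filtering...
    return f"{path}: processed"

scans = ["scan_01.ply", "scan_02.ply", "scan_03.ply", "scan_04.ply"]

if __name__ == "__main__":
    # Process scans in parallel across CPU cores instead of one at a time.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(process_scan, scans):
            print(result)
```

Whether the parallelism comes from the library itself or from a wrapper like this, throughput across many scans is the metric worth benchmarking before committing to a library.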
Make versus Buy
This decision arises in the early phases of processing. Take point cloud libraries as an example: some of the big brands producing point cloud processing software are Autodesk, Bentley, Trimble, and Faro. However, most of these systems arrive as packages bundled with 3D modelling, driving up costs. In such cases, it can be advisable to build an in-house point cloud library that suits the necessities. Nowadays, many open-source platforms, such as the Point Cloud Library (PCL), get the job done and have proven quite beneficial.
Commercials and Licensing
The commercial aspect also plays a vital role while choosing an algorithm library. Whether to opt for a single or recurring payment depends on the volume and nature of the project.
There are different models to choose from, if one decides to go with licensing a commercial library:
A: Single payment: no per license fees, and an optional AMC
B: Subscription Based: Annual subscription, without per license fees
C: Hybrid: A certain down payment and per license revenue sharing
Whatever option you select, make sure there is a clause in the legal agreement that caps the increase in the charges to a reasonable limit.
Storage, Platforms and Support
Storage has become less of an issue than it was even a decade ago. Desktops and laptops with more than a terabyte of capacity are all over the market. Not every algorithm library requires heavy graphics; investing in a quality graphics card matters only if your preferred library demands heavy graphics usage. That does not mean settling for the cheapest hardware and storage available: a quality processor with plenty of RAM is sensible if the processing task is CPU- and memory-intensive. Another point to look into is the type of platform, or interface to be exact, that the algorithm library supports. Varied requirements call for varied platforms such as Microsoft Windows, Mac, and Linux. Usage and licensing should be taken into account before selecting an interface.
Last but not least, inputs from customers are highly significant, and there has to be a robust support system to address any grievance from the customer side. Trained support staff or a customised automated support system must be given high priority.
At the starting phase of developing an application, the primary question is usually this: should it be an add-on application or a standalone application?
Before we get into the details, we need to understand what exactly these two terms mean in the computing world.
An add-on (also known as addon or plug-in) is a software application, which is added to an existing computer program to introduce specific features.
As per Microsoft’s style guide, the term add-on is supposed to represent the hardware features while add-ins should be used only for software utilities, although these guidelines are not really followed as terms are mixed up quite often. When a program supports add-ons, it usually means it supports customization. Web browsers have always supported the installation of different add-ons to suit the tastes and topics of different users by customizing the look and feel of that particular browser.
There are many reasons for introducing add-ons in computer applications. The primary reasons are:
To extend an application by enabling third-party developers to create a variety of add-ons
To support new features
To reduce the size of an application
To separate source code from an application because of incompatible software licenses
Usually, host applications operate independently, which makes it possible for developers to add and update add-ons without impacting the host application itself. A host application doesn’t depend on its add-ons; on the contrary, an add-on depends on the host application. Add-ons rely on the services provided by the host and cannot operate by themselves.
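A minimal Python sketch of this host/add-on relationship (all names hypothetical): the host exposes a registration service, add-ons plug into it, and the host runs fine without any of them.

```python
class HostApplication:
    """The host runs on its own; add-ons are optional extensions."""

    def __init__(self):
        self._addons = {}

    def register(self, name, feature):
        # Service the host provides: a place for add-ons to plug in.
        self._addons[name] = feature

    def run(self, text):
        print(f"host output: {text}")
        for name, feature in self._addons.items():
            print(f"[{name}] {feature(text)}")  # each add-on extends the output

# An add-on depends on the host's register() service, not the other way around.
app = HostApplication()
app.register("shout", lambda t: t.upper())
app.register("count", lambda t: f"{len(t)} characters")
app.run("hello add-ons")
```

Delete the two register() calls and the host still works, which is precisely the one-way dependency described above.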
A standalone application is software that doesn’t come bundled with other independent software. In simple words, a standalone application does not require any separate software to operate.
A stand-alone application deploys services locally, uses the services, and terminates the services when they are no longer needed. If an application does not need to interact with any other applications, then it can be a stand-alone application with its own exclusive local service deployment. Services locally deployed by this application are not available to any other application.
A standalone application needs to be installed on every system, which makes it hard to maintain. In the event of a system crash or a virus attack, when a system needs to be replaced or reinstalled, the application also needs to be reinstalled. Access to the application is limited to the systems that have it installed.
Standalone applications can never be kept online, and remote availability of data is practically impossible. However, there are situations where a standalone application is the best choice. Here are a few:
Text mode printing on pre-printed stationery, which browsers fail to do
Where data security is very high and you don’t want the data to travel on the wire at all
Design applications which need very high responsiveness and at the same time work on big data structures
Printing on legacy continuous stationery
No need for networking; the application is needed only on a single system
More hardware support like barcode printers, webcam, biometric devices, LED Panels, etc.
More Operating System level operations like direct backup to external devices, mouse control, etc.
As software development progresses, there comes a stage where the software must be evaluated before it can be called final. This phase is known as testing. Testing detects and pinpoints bugs and errors in the software, which are then rectified. Sometimes the rectifications introduce new errors, sending the software back for another round of testing and creating a repeating loop. This repeated testing of an already tested application, to detect errors resulting from changes, has a name: Regression Testing.
Regression testing is the selective retesting of an application to ensure that modifications carried out have not caused unintended effects in previously working functionality.
In simple words, to ensure all the old functionalities are still running correctly with new changes.
This is a very common step performed by testers in any software development process. Regression testing is required in the following scenarios:
If the code is modified owing to changes in requirements
If a new functionality is added
While rectifying errors
While fixing performance related issues
Although every software application requires regression testing, specific considerations apply to different applications based on their functioning and utility. Computer-Aided Design (CAD) software has particular points to keep in mind before undergoing regression testing.
Regression testing can be broadly classified into two categories: UI Testing and Functionality Testing. UI stands for User Interface, so UI testing basically tests an application's graphical interface, and numerous tools are available for it. Functional testing, however, presents a more demanding situation. This article focuses on the points to take care of while carrying out functional regression testing.
Here are the most effective points to consider for functional regression testing:
It is important to know what exactly needs to be tested and the plans or procedures for the testing. Collect the information and test the critical things first.
It is important to be aware of market demands for product development. A document or traceability matrix should be prepared linking the product to the requirements and to the test cases, and the matrix should be updated as requirements change.
Include test cases for functionalities that have undergone the most recent changes. It is difficult to keep writing and modifying test cases as the application is updated frequently, since updates introduce internal changes to the code that might break existing functionalities.
It is preferred to run the functionality testing in the background mode (non-UI mode) because often it is faster and eliminates problems associated with display settings on different machines.
Lay down precise definitions of the output parameters of interest: anything from the number of faces, surface area, volume, weight, and centre of gravity to the surface normal or curvature at a particular point. It is always a good idea to have a quantifiable output parameter that can be compared.
It is often advisable to develop a utility that writes the parameters of interest to an output file, whether text, CSV, or XML.
Creating baseline versions of the output data files is a good idea, as is visually checking every part for which baseline data is created.
Developing an automation script enables the entire test suite to run without manual intervention so the results can be compared.
For every test case run, compare the generated output data with the baseline version, keeping in mind that if the output data contains doubles or floats, tolerance plays a very important role (see the sketch after this list).
Some areas of the application are so error-prone that they fail with even a minute change in the code. It is advisable to keep track of failing test cases and cover them in the regression test suite.
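To make the baseline comparison concrete, here is a minimal Python sketch, assuming a hypothetical CSV layout of parameter,value rows, that compares a run's output against a stored baseline using a float tolerance:

```python
import csv
import math

def compare_with_baseline(baseline_csv, output_csv, rel_tol=1e-6, abs_tol=1e-9):
    """Compare each output parameter against the baseline within tolerance."""
    failures = []
    with open(baseline_csv) as b, open(output_csv) as o:
        baseline = {row["parameter"]: float(row["value"]) for row in csv.DictReader(b)}
        output = {row["parameter"]: float(row["value"]) for row in csv.DictReader(o)}
    for name, expected in baseline.items():
        actual = output.get(name)
        # Exact equality is meaningless for floats; compare within tolerance.
        if actual is None or not math.isclose(actual, expected,
                                              rel_tol=rel_tol, abs_tol=abs_tol):
            failures.append(f"{name}: expected {expected}, got {actual}")
    return failures

# Example: baseline.csv / run_42.csv each hold rows like "volume,125.000001"
for failure in compare_with_baseline("baseline.csv", "run_42.csv"):
    print("REGRESSION:", failure)
```

The tolerance values here are placeholders; in practice they should match the geometric tolerance the CAD kernel itself works to, for exactly the reasons discussed in the tolerance section below.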
Failure to address performance issues can hamper the functionality and success of your application, with unwelcome consequences for end users if your application doesn’t perform to expectations.
In any software development process, the methodology involved is more or less the same. The most generic requirements are developers, a preferred programming language, testers, and a carefully planned set of actions to perform. The same applies to the development of CAD software.
Having CAD software that can actually meet product development needs is an obvious necessity. Although there is a lot of common ground between a CAD software development project and a regular one, there are criteria very specific to CAD software development that need to be addressed.
Let us take a walkthrough:
Acceptance Criteria
Acceptance criteria are a list of user requirements and product scenarios, item by item. They explain the conditions under which the user requirements are met, removing uncertainty and misunderstanding about the client's expectations. Defining acceptance criteria is not simple, however, and it is not realistic to expect a 100% pass rate. An ideal way to handle this is to have a set of test cases with defined input and output data.
Choice of Algorithm
To successfully develop a complex product, two critical questions must be answered: how to develop the right product, and how to develop the product right. Unlike problems such as interest rate calculations or workflow management systems, there is no defined set of steps that leads to the final answer. There are often multiple algorithms for a single issue, and the situation becomes more complicated when an algorithm deemed perfect for one situation does not perform well in a different scenario, which often leads to trade-offs.
Tolerance
Tolerance is one of the significant factors in evaluating product quality and cost. While tight tolerance assignment ensures design requirements in terms of function and quality, it also imposes more requirements on manufacturing, inspection, and service, resulting in higher production cost. Most CAD data works on double- and float-valued variables, so floating-point precision and tolerance play a very important role in the algorithms. When using data from other systems, say a STEP file from another source, a mismatch in tolerance can cause a lot of issues in the destination system.
Risk of Regression
Adding new functionality or improving an algorithm always carries a risk of breaking test cases that worked before the changes. One should always develop a robust test suite for catching regressions while carrying out testing. To create a regression test suite, one must have thorough application knowledge or a complete understanding of the product flow.
Interoperability
The quick emergence of varied CAD software has democratized design, leading to the use of multiple CAD systems in the design process and aggressively challenging CAD interoperability. Different suppliers require different CAD platforms, depending on many factors, primarily the nature of the task and the product being worked on. Merging different CAD data without affecting the design intent is quite a hassle. Although much software these days supports different CAD file formats, there are instances where the particulars of a project have confined the product to one CAD system. Interoperability saves extra work, and whether to make your own software compatible with others is a decision that should be taken seriously.