AI is only as smart as our influence - The EE

Joseph Zulick

The old adage "garbage in, garbage out" holds true even in the smartest of systems. If you incorrectly define data points, or describe an outcome as bad when it was good, then the decision tree that Artificial Intelligence uses starts to fall apart.
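As a toy illustration (the data, the threshold rule, and the pass/fail cutoff here are all invented for the sketch), consider a trivial "model" that learns a single pass/fail threshold. Fed correct labels, it recovers the true rule; fed outcomes described as bad when they were actually good, it confidently learns the wrong one:

```python
# Toy sketch of "garbage in, garbage out"; all data here is invented.
# True rule: a part is good when its measurement x exceeds 50.
samples = [(x, x > 50) for x in range(100)]

def best_threshold(data):
    """Learn the single threshold that best separates good from bad on the data given."""
    return max(range(101), key=lambda t: sum((x > t) == label for x, label in data))

def true_accuracy(t):
    """Score a learned threshold against the real outcomes."""
    return sum((x > t) == label for x, label in samples) / len(samples)

clean_t = best_threshold(samples)          # learns 50: the real rule

# Now describe some good outcomes (x in 51..70) as bad, as in the text above:
noisy = [(x, False if 50 < x <= 70 else label) for x, label in samples]
noisy_t = best_threshold(noisy)            # learns 70: it fits the garbage perfectly

print(true_accuracy(clean_t), true_accuracy(noisy_t))  # 1.0 0.8
```

The point is that the model did nothing wrong in the second case: it found the rule that best fits what it was told. The failure was in the labels we supplied.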

It is unfair to claim that AI or sensor tracking failed because it didn’t meet our outcome, especially if we supplied the data and the parameters of success, says Joseph Zulick, writer and manager at MRO Electric and Supply.

In a recent "Destiny of Manufacturing" podcast, Danny Schaeffler, president of Engineering Quality Solutions, observed that many companies are not ready for Artificial Intelligence because they have not finished mastering the systems and data they already have.

Schaeffler felt that many companies have not reached the potential of their current systems. He went on to ask, "Are we prepared to compete in the global market? Technology is pulling the planet closer together and we all have to examine our competitiveness." Are you examining the total cost of ownership? We do not always take the price difference into account when we undertake a new job or task. Have we factored in transport costs? If the work is international, have we factored in duties, taxes and sometimes tariffs (yes, other countries charge tariffs too)?

Do we know the implementation costs? All of this information factors into our AI decisions and costs. Our current systems dictate the questions we will be asking for solutions; they are also the data points we will be using to provide those conclusions.

Many plants have stopped using their data collection systems, such as tonnage monitors and die protection. This means they lack the historical trends that AI needs to draw a conclusion. Even in a simplistic array of readings, you can see how data is critical to defining a trend and a solution.

Is your company doing enough to meet the demands of the workplace? Mr. Schaeffler made an excellent point: as materials change and new products demand complex materials, it's not just the manufacturing or production departments that need to be educated.

The engineers need to research whether the equipment is adequate for the new materials. Your AI is all but starting over if you can't directly correlate the old material results with the new material expectations. The purchasing department needs to be looking at new material sources; the supplier of simple mild steel may not be the best source for exotic materials.

On the quoting side, are you pricing yourself too low? All departments will need to be trained on the new technology and materials. The sensors will need new calibration points as these changes occur. And the resulting data will need to be compared with the old data: is it still useful?

Unfortunately, when you take on new materials, simulation data may follow a standard linear curve, but more likely it won't.

Along with the data that needs to be gathered to produce accurate results, we must also temper our expectations. If you think that a first-run part with new tools, cutters, nozzles and so on, combined with new materials and limited data, will produce accurate initial results, you are setting yourself up for disappointment.

Some of this is due to an extension of the curse of knowledge: once we know how something works, we can't unknow that information. Consequently, we set expectations too high and create unrealistic timelines. While AI and IoT can make life easier and more accurate, they cannot eliminate launch-phase challenges until we can develop baseline data.

Our influence over AI can be felt in computer simulations and in closed-loop feedback, where confidence in the data can be misleading because the system has confidence in its calculation.

Let's take an off-the-floor example. We have a programmable thermostat that turns the heat on when the temperature drops two degrees. The data that I monitor is the front doorbell, which trips when the button is pressed. I may assume that the visitor is the cause of the drop in temperature; I have data that correlates this to be true, when in reality it's the door opening and closing that causes it, whether there is a visitor or not.

You can have a high degree of confidence because you show a correlation, but without adequate analysis of what the data means you may never reach the root cause. You may also have an inadequate number of sensors, or be sensing the wrong thing, to model this situation.
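That trap can be simulated in a few lines. In this sketch (all probabilities invented), every visitor both rings the bell and opens the door, but the door also opens for other reasons; the bell correlates almost perfectly with the temperature drop, yet the drops that happen with no bell at all give the real cause away:

```python
import random

# Toy model of the doorbell/thermostat story; all probabilities are invented.
random.seed(0)

events = []
for _ in range(1000):
    visitor = random.random() < 0.3                 # someone comes to the door
    bell = visitor                                  # the bell rings with the visitor
    door_opens = visitor or random.random() < 0.1   # the door also opens for other reasons
    temp_drop = door_opens                          # heat escapes only when the door opens
    events.append((bell, temp_drop))

# Correlating bell presses with temperature drops looks utterly convincing...
bell_and_drop = sum(1 for b, t in events if b and t)
bell_total = sum(1 for b, t in events if b)
print(f"P(drop | bell) = {bell_and_drop / bell_total:.2f}")        # high

# ...but drops also happen with no bell at all, exposing the real cause.
nobell_drop = sum(1 for b, t in events if not b and t)
nobell_total = sum(1 for b, t in events if not b)
print(f"P(drop | no bell) = {nobell_drop / nobell_total:.2f}")     # nonzero
```

A system that only logs the bell would never see the second number, which is exactly the "inadequate number of sensors" problem described above.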

The old saying is that everything measurable isn't important, and everything important isn't measurable. This paraphrases the quote often attributed to Einstein: "Not everything that counts can be counted, and not everything that can be counted counts." I'm sure this came after a grad student asked about their grades.

We are the greatest influence on the accuracy of our results. I don't want to venture into intentionally manipulating data for personal gain, but at a very minimum, it has been observed in Linknet and other data collection systems that formulas are often edited to produce OEE numbers in the 80% range when in reality they sit in the 60% range.

If you obey the true rules of system OEE, machine availability starts from a base of 100%: 365 days a year, 24 hours a day, 7 days a week. Many companies that choose not to run a third shift will say they're 85% efficient running two shifts, when against that base the maximum is more like 66%.

This is where data gets tricky, and AI is only as good as what we choose to provide it. Many people will say efficiency is a rating of how well you do your process, so two shifts can rate a maximum of 100%. The difficulty is that you may never try to fill the pipeline for a third shift if you believe you're already maxed out at 90% OEE.
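The arithmetic behind that gap is easy to sketch (an 8-hour shift length is assumed here):

```python
# Availability against the full 24/7 calendar base described above.
# An 8-hour shift length is assumed for the sketch.
HOURS_PER_DAY = 24
SHIFT_HOURS = 8

def availability(shifts_run, efficiency=1.0):
    """Fraction of calendar time the machine is actually producing."""
    return (shifts_run * SHIFT_HOURS / HOURS_PER_DAY) * efficiency

print(f"{availability(2):.1%}")        # two perfect shifts cap out at 66.7% of the day
print(f"{availability(2, 0.85):.1%}")  # an "85% efficient" two-shift plant: 56.7%
print(f"{availability(3):.1%}")        # only running all three shifts reaches 100.0%
```

Both numbers are "true"; they just answer different questions, which is why the base you choose must be stated before the percentage means anything.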

This is just one example of how data can get muddy if you allow it to become that way.

The other problem we have with AI is that, when it comes to data, too often we start with the conclusion and work toward proving our point. This becomes a big problem when you are trying to sell the concept of big data to workers and operators while you have a history of using the data not to bring about improvement but to assign blame.

AI can provide predictive analysis based on current data and trend examination. These trends can give us that look over the horizon, clear of the forest and the trees. When two roads diverge in the woods, you can now find the path that takes you to the promised land and avoid the one that leads you off a cliff.

To give people the right balance to achieve the ultimate goal of improvement and innovation, you have to set your goal for AI to provide these solutions. It is more difficult than ever to compete in this global economy, and doing so will take the advancements and breakthroughs that come from looking at our problems in a new way and allowing AI to provide the path.

The author is Joseph Zulick, writer and manager at MRO Electric and Supply.
