
Guidelines for Ensuring Data Quality in Process Control Systems

With recent developments in analytical instrumentation, process plants are making increasing use of analyzers to improve the operational efficiency of their process control systems. The important point is that an analyzer can only improve a process control system to the extent that the system understands and trusts the data it receives from the analyzer.

The information received from a process analyzer follows a data hierarchy. The analytical device can have one or more sub-controllers, each sub-controller manages one or more streams, and each stream that is analyzed yields one or more measured component values.

Making this hierarchy explicit gives meaning to the associations between sets of data. For example, if stream 1 is disabled, it is immediately obvious which components are affected. Similarly, when a sub-controller is turned off, the hierarchy defines which streams may be affected.
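As a minimal sketch, this hierarchy can be modeled as nested records so that a status change at any level identifies the affected values below it. The class and field names here are assumptions for illustration, not any particular analyzer's interface:

```python
# Illustrative sketch of the analyzer data hierarchy (names are assumptions).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    value: float = float("nan")  # measured component value

@dataclass
class Stream:
    name: str
    online: bool = True
    components: List[Component] = field(default_factory=list)

@dataclass
class SubController:
    name: str
    enabled: bool = True
    streams: List[Stream] = field(default_factory=list)

@dataclass
class Analyzer:
    name: str
    sub_controllers: List[SubController] = field(default_factory=list)

def affected_components(analyzer: Analyzer) -> List[str]:
    """List components whose values cannot be trusted because a
    parent stream or sub-controller is disabled."""
    affected = []
    for sc in analyzer.sub_controllers:
        for stream in sc.streams:
            if not sc.enabled or not stream.online:
                affected.extend(c.name for c in stream.components)
    return affected
```

With the hierarchy explicit, "which components does this outage affect?" becomes a simple query rather than tribal knowledge.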

Understanding data quality starts with the communication between the analyzer and the process control system. Once it is established that there are no communication faults, engineers should look at the analyzer status to confirm that the analyzer is running properly and to check whether an error condition is present. For the stream 1 components to be validated, the analyzer must be running with no faults, and stream 1 must be set to online status (a sketch of this chain of checks follows the questions below). Engineers should then ask a further series of questions:

What should happen to the component values during a maintenance cycle? And,

Should the process control system hold the last value while the analyzer validation cycle is being performed?
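Whatever the answers, the chain of checks described above has to pass before any component value is trusted. Here is a minimal sketch, assuming simple boolean status flags (the names are illustrative, not any vendor's interface):

```python
def component_is_valid(comm_ok: bool,
                       analyzer_fault: bool,
                       stream_online: bool) -> bool:
    """A component value is trusted only when every level of the
    hierarchy above it checks out: communications are healthy, the
    analyzer reports no fault, and the stream is online."""
    if not comm_ok:
        return False          # communication fault: nothing downstream is trusted
    if analyzer_fault:
        return False          # analyzer error condition
    return stream_online      # stream must be set to online status
```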

The calibration cycle is the first point at which the analytical results reported by a gas chromatograph can change. One option is for the process control system to hold the last good value while the calibration runs. The other option is to set the values to not-a-number (NaN) for the duration of the calibration; this gives a crucial trigger for re-initialising the control algorithms when the analyzer returns to normal operation. The data quality logic should also check for a timeout, so that a value that has stopped updating is not treated as current.
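A minimal sketch of that logic follows; the timeout threshold and function names are free choices for illustration:

```python
import math

STALE_AFTER_S = 120.0  # assumed timeout; tune to the analyzer's cycle time

def reported_value(raw_value: float, calibrating: bool,
                   seconds_since_update: float) -> float:
    """Value the control system should use: NaN during a calibration
    cycle (a trigger to re-initialise control algorithms on return to
    normal operation) and NaN when the value has timed out."""
    if calibrating or seconds_since_update > STALE_AFTER_S:
        return float("nan")
    return raw_value

# Consumers test before acting on the value:
value = reported_value(42.1, calibrating=False, seconds_since_update=10.0)
if math.isnan(value):
    print("hold output / re-initialise")  # data not trustworthy
```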

Engineers can check analyzer data quality in two steps. The first step is to verify communications across all the analytical equipment; this is required before any result can be treated as coming from a normally operating device. Engineers should have a clear understanding of the different analyzer modes, fault signals, and error conditions.

The second step is to determine whether the logic that validates the data applies reasonable criteria. An instrument may report concentrations outside sensible process limits, so the logic should verify that the data falls within a range that makes physical sense for the process. There are also physical limitations on how fast a value can change, so a rate-of-change alarm can be implemented.
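These criteria can be sketched as follows; the bounds and rate threshold are assumptions that would come from knowledge of the actual process:

```python
LOW_LIMIT = 0.0      # assumed physical lower bound, e.g. mol%
HIGH_LIMIT = 100.0   # assumed physical upper bound
MAX_RATE = 5.0       # assumed maximum plausible change per analysis cycle

def value_is_reasonable(value: float, previous: float) -> bool:
    """Second-step checks: range limit and rate-of-change alarm."""
    if not (LOW_LIMIT <= value <= HIGH_LIMIT):
        return False                  # outside physical process limits
    if abs(value - previous) > MAX_RATE:
        return False                  # changed faster than physics allows
    return True
```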

The Advantages and Disadvantages of Auto-Tuning Controls

The three historic challenges in tuning (single-loop tuning, the limited success of auto-tuning, and the modern difficulties of model-based control) all share a similar root cause. The Control Engineering website published a two-part series on auto-tuning: Part 1, on its nature and definitions, in the June 2018 issue, and Part 2, on the advantages and disadvantages of auto-tuning control, in the August 2018 issue. Both articles are informative reads and draw accurate conclusions, but they miss one of the most important implications. There is a story about auto-tuning that reveals a valuable lesson.

The articles conclude, in effect, that auto-tuning is still not a panacea, and engineers rightly suspect as much. The most important challenge is the unpredictable and nonlinear behavior of real processes: the actual process response tends to differ from the pre-identified response on which the tuning or model is based. This turns out to be true of most processes, and it is why auto-tuning has achieved only limited success regardless of the number of industry attempts. A process whose actual response differs from its identified response poses a fundamental conundrum for both tuning and modeling.

This also helps explain why single-loop tuning and multivariable control modeling tend to require recurring maintenance in practical applications, even though in theory they should be one-time engineering tasks. That has long been the reality of loop tuning, and it has now emerged as the reality of model-based control too.

There are two popular responses to this problem, though neither guarantees a solution. The first idea is average tuning or an average model. Many engineers regard this as the best available compromise, but it does not solve the underlying problem. The second idea is auto-tuning or adaptive modeling. This has the potential to be more problematic than averaging, because the tuning identified today may not be applicable or appropriate tomorrow.

Put plainly, process gains are subject to change. Most, if not all, gains change frequently and dynamically because of everyday disturbances in process conditions. That is why retuning and remodeling remain as commonplace as they do, and why auto-tuning has seen only limited success; engineers who have spent years troubleshooting control performance can testify to this. Auto-tuning does not solve this recurring problem, and users should view the emergence of adaptive modeling, which attempts to do the very same thing on a larger scale, with a critical eye.
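To make the root cause concrete, here is a toy simulation (all numbers assumed) of how a PI controller tuned for one process gain degrades when that gain drifts:

```python
# Toy illustration (all values assumed): a PI controller tuned for one
# process gain behaves very differently when that gain changes.

def step_response(process_gain: float, kp: float = 0.67, ki: float = 0.67,
                  tau: float = 1.0, dead_time: float = 0.5,
                  dt: float = 0.05, t_end: float = 20.0) -> float:
    """First-order-plus-dead-time process under PI control.
    Returns the peak value reached for a unit setpoint step."""
    n_delay = int(dead_time / dt)
    u_buffer = [0.0] * n_delay            # models transport delay
    y, integral, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = 1.0 - y
        integral += error * dt
        u_buffer.append(kp * error + ki * integral)
        u_delayed = u_buffer.pop(0)
        y += dt * (-y + process_gain * u_delayed) / tau
        peak = max(peak, y)
    return peak

print(step_response(1.0))  # the gain the controller was tuned for: mild overshoot
print(step_response(3.0))  # tripled gain: pronounced overshoot and ringing
print(step_response(0.3))  # a third of the gain: sluggish, slow to reach setpoint
```

The tuning itself is not wrong; the process it was identified on has effectively become a different process, which is exactly the conundrum described above.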