Proposed and Accepted Revisions Concerning Advanced Electrical Safety

As part of the current update cycle of the National Electrical Code (NEC), the 18 code-making panels are revising certain regulations. The NEC is published by the National Fire Protection Association (NFPA) with valuable input from many concerned and related organizations, NEMA among them. The panels have reviewed 3,730 public inputs since January 2018, and the recent meetings resulted in 1,406 changes in the form of first revisions. Some of these initial revisions could bring significant improvements to electrical safety in the built environment. The changes will now go through the remainder of NFPA's revision process.

The regulations and revisions include:

  • Ground Fault Protection 

Section 210.8(A) of the 2017 NEC requires ground-fault circuit interrupter (GFCI) protection for 15- and 20-ampere, 125-volt receptacles installed in the locations specified in (A)(1) through (10). One proposed change would extend GFCI protection to all 125-volt through 250-volt receptacles, because there have been numerous instances of individuals being electrocuted while interacting with 250-volt receptacles such as range and dryer outlets.

  • Surge Protection in Existing Dwelling Units 

A newly proposed requirement, 230.67(A), would require surge protective devices (SPDs) to be installed at the service panels of all existing dwelling units. The SPDs would serve as a protective barrier for the electronics and other safety devices installed inside residential structures.

  • Disconnection for Emergencies 

A proposed requirement, 230.85, would require all dwelling units to provide a means of disconnecting installed equipment in an emergency at a readily accessible location. The proposal was submitted and accepted on input from first-responder organizations. It will help protect firefighters from the harm of a possible arcing event if they must remove an electrical meter while it is under load.

In conclusion, the code-making panels' task groups discussed the 1,932 public comments submitted in response to the panels' actions last January. The panel meetings are scheduled to take place from October 21 to November 3, 2018.

Energy Efficiency and Savings Discussed during the Motor Summit

Ask a child whether they would like another scoop of their favorite ice cream and they will instantly shout, "More, please!" But however loudly they shout, they will find that too much ice cream causes a stomach ache later on. The same goes for ever-increasing energy-efficiency levels.

Over the last 30 years of technology advancements, the motor industry has raised the total efficiency of a typical 5-horsepower motor from 85% to 90%. Regulators and some energy advocates continue to push for ever-higher levels. But mandating changes in one area while forgetting how that component interacts with the rest of the overall system can reduce energy savings and bring unintended consequences. Like adding another scoop on top of the ice cream you just ate, it is unreasonable. Continuing to raise the efficiency of individual components does not necessarily deliver greater energy savings or better products.
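
As a rough illustration of why a component-level gain can be small next to a system-level one, here is a sketch with invented duty figures (a 5 hp motor, 4,000 operating hours per year, and a hypothetical variable-speed pump scenario); none of these numbers come from the summit itself:

```python
# Component-level vs system-level annual energy savings (illustrative).

SHAFT_KW = 5 * 0.746      # 5 hp of shaft output expressed in kW
HOURS = 4000              # assumed annual operating hours (hypothetical)

def annual_input_kwh(efficiency):
    """Electrical energy drawn per year for a given motor efficiency."""
    return SHAFT_KW / efficiency * HOURS

# Component view: raise motor efficiency from 85% to 90%.
component_saving = annual_input_kwh(0.85) - annual_input_kwh(0.90)

# System view: a variable-speed drive trimming pump speed by 20% cuts
# shaft power by roughly half (pump affinity laws: P ~ n^3).
system_saving = annual_input_kwh(0.90) - (SHAFT_KW * 0.8**3) / 0.90 * HOURS

print(f"component-level saving: {component_saving:.0f} kWh/yr")  # ~975
print(f"system-level saving:    {system_saving:.0f} kWh/yr")     # ~8090
```

Under these assumptions the system-level measure saves roughly eight times what the efficiency-class bump does, which is the point the summit discussion was driving at.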

While motors can be tested and measured individually, good test results do not necessarily translate into a large reduction in wasted energy. As a result of this observation, NEMA's focus has shifted from pursuing minimal increases in energy efficiency to a more important matter: total energy savings.

Another hurdle in developing an overall energy systems approach is how efficiency standards estimate the amount of energy used: they are built around a single test point for a product rather than the product's actual use. IEC 61800-9-2, Adjustable speed electrical power drive systems, Part 9-2: Ecodesign for power drive systems, motor starters, power electronics and their driven applications, defines energy efficiency indicators for power drive systems and motor starters, and it shows the need for system-level efficiency testing. The next step is shifting the long-held assumption that more component regulation makes a better overall system. The mantra should instead be that energy savings are achieved through better, more efficient management at the system level, together with compliance with existing regulations.

In conclusion, companies should look at the overall savings delivered by a more efficient system rather than at how much energy a single device can save.

The Reliability of Distribution Feeder Automation Networks

Several utilities are researching improved ways to optimize distribution feeder automation by exploring cellular communication services.

Distribution feeder automation systems provide many different classes of solutions. The main reliability-improving functions they deliver are fault location, isolation, and service restoration and automatic transfer functionality. These functions improve the reliability indices of distribution feeders. The systems can be centralized, decentralized, or a combination of both.

Centralized systems have traditionally been the primary solution for automating distribution feeder networks. But they tend to react slowly, because they must wait for the protection systems to disconnect faults on the network before personnel can act to locate the faulted feeder segment, reconfigure the feeder, and supply alternate power to unaffected areas.

Decentralized systems, by contrast, coordinate protection and automation functions within the field devices themselves, providing faster fault isolation and system reconfiguration actions. Combining decentralized distribution automation (DA) systems with cellular communications therefore offers good possibilities for increasing the reliability of distribution networks.

Low-power unlicensed radios and, more recently, direct fiber-optic cable connections have been the most common data communication methods for DA applications. Even so, the use of cellular communications by electric utilities is not entirely new. In fact, cellular IT services have already been used to communicate field device data back to the utility in various systems, such as advanced metering infrastructure (AMI).

Cellular communication can now also be used to transmit data for time-critical applications such as direct transfer trip (DTT), fault location, isolation, and service restoration (FLISR), and automatic transfer schemes (ATS), provided the communication links between field device controllers are reliable and secure.

To make this happen, an operational technology (OT) class of service that supports these unique requirements is critically important. OT systems are deterministic in nature and must act on the information they receive. Such a system requires security, dependable latency, and reliability in line with established substation protection standards.
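
As a sketch of what "dependable latency" means in practice, the check below compares measured link latencies against per-application budgets. All budget values and latency samples are invented for illustration; the real limits come from the utility's protection standards.

```python
# Hypothetical OT latency-budget check for cellular-connected field devices.

LATENCY_BUDGET_MS = {   # invented per-application limits, not real standards
    "DTT": 20,          # direct transfer trip is the most time-critical
    "FLISR": 1000,      # restoration logic tolerates longer delays
    "AMI": 60_000,      # metering traffic is not time-critical
}

def link_meets_budget(app, samples_ms, percentile=0.99):
    """True if the given share of latency samples fits the app's budget."""
    within = sum(1 for s in samples_ms if s <= LATENCY_BUDGET_MS[app])
    return within / len(samples_ms) >= percentile

cellular_samples = [35, 42, 38, 55, 40, 47, 39, 44, 61, 43]  # ms, made up

print(link_meets_budget("DTT", cellular_samples))    # False: too slow for DTT
print(link_meets_budget("FLISR", cellular_samples))  # True: fine for FLISR
```

The same link can be acceptable for one DA function and unusable for another, which is why the class of service has to be evaluated per application rather than per network.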


Guidelines for Ensuring Data Quality in Process Control Systems

With new developments in analytical instrumentation, process plants are increasingly using analyzers to improve the operational efficiency of their process control systems. The key to using them well is that the process control system must understand, and be able to trust, the data it receives from the analyzer.

The information received from a process analyzer forms a data hierarchy. An analytical device can have one or more sub-controllers; each sub-controller manages one or more streams; and each analyzed stream yields one or more measured component values.

A data hierarchy with explicit information makes the associations between the data clear. For example, if stream 1 is disabled, it is immediately obvious which components are affected. Similarly, if a sub-controller is turned off, the hierarchy defines which streams can possibly be affected.
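
The hierarchy can be sketched as nested mappings; the device, sub-controller, stream, and component names below are all hypothetical:

```python
# Analyzer -> sub-controller -> stream -> components (names are invented).

analyzer = {
    "GC-101": {                      # analytical device
        "sub_controller_1": {
            "stream_1": ["methane", "ethane"],   # measured components
            "stream_2": ["propane"],
        },
        "sub_controller_2": {
            "stream_3": ["CO2", "H2S"],
        },
    },
}

def affected_components(hierarchy, dead_stream):
    """Components that lose their source when a stream is disabled."""
    return [comp
            for subs in hierarchy.values()            # each device
            for streams in subs.values()              # each sub-controller
            for stream, comps in streams.items()      # each stream
            if stream == dead_stream
            for comp in comps]

print(affected_components(analyzer, "stream_1"))  # ['methane', 'ethane']
```

With the associations explicit, disabling a stream (or a whole sub-controller) maps directly to the set of component values the control system must stop trusting.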

Understanding data quality starts with the communication between the analyzer and the process control system. Once there are no communication faults, engineers should check the analyzer status to determine whether the analyzer is running properly or reporting an error. For a stream 1 component value to be valid, the analyzer must be running with no faults and stream 1 must be set to online status. Engineers should then ask a further series of questions:

What happens to the component values during a maintenance cycle?

Should the process control system hold at least the last value while an analyzer validation cycle is being performed?

Defining the calibration cycle is the first step in handling the changing analytical results from a gas chromatograph. Another option is to set the values to not-a-number (NaN) while a calibration is in progress; this is a crucial step because it forces the control algorithms to re-initialize when the analyzer returns to normal operation. The data quality logic should also check for a timeout.
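
A minimal sketch of that quality logic, assuming the convention that the analyzer reports NaN during calibration and that readings older than 120 seconds are stale (both the convention and the timeout are assumptions, not figures from the article):

```python
import math
import time

STALE_AFTER_S = 120   # hypothetical timeout before a value is considered stale

def component_quality(value, analyzer_mode, last_update_s, now_s):
    """Classify one measured component value as 'good', 'hold', or 'bad'.

    'hold': keep the last good value and don't control on this one;
    'bad':  the value timed out and must not be used at all.
    """
    if now_s - last_update_s > STALE_AFTER_S:
        return "bad"                      # timeout: value is stale
    if analyzer_mode == "calibration" or math.isnan(value):
        return "hold"                     # calibration in progress
    return "good"

now = time.time()
print(component_quality(2.3, "normal", now - 10, now))           # good
print(component_quality(float("nan"), "normal", now - 10, now))  # hold
print(component_quality(2.3, "normal", now - 300, now))          # bad
```

Downstream control algorithms can then branch on the three outcomes, re-initializing themselves when a component transitions from "hold" back to "good".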

Engineers can check analyzer data quality in two steps. The first step verifies communications with all the analytical equipment, which is a prerequisite for normally operating results. Engineers should have a clear understanding of the different analyzer modes, fault signals, and error conditions.

The second step determines whether the logic that verifies the data applies reasonable criteria. An instrument may report concentrations outside the process limits, so the logic should verify that the data falls within a range that makes physical sense for the process. There are also physical limits on how fast a value can change, so a rate-of-change alarm can be implemented.
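
The two checks in this second step can be sketched as follows; the range and rate limits are invented for illustration:

```python
# Range check plus rate-of-change alarm for analyzer readings (limits invented).

LOW, HIGH = 0.0, 100.0   # plausible concentration range for the process, mol%
MAX_RATE = 5.0           # fastest physically possible change, mol% per minute

def validate(value, prev_value, dt_min):
    """Return (ok, reason) for a new analyzer reading."""
    if not LOW <= value <= HIGH:
        return False, "outside process limits"
    if prev_value is not None and abs(value - prev_value) / dt_min > MAX_RATE:
        return False, "rate-of-change alarm"
    return True, "ok"

print(validate(42.0, 41.5, 1.0))   # (True, 'ok')
print(validate(42.0, 20.0, 1.0))   # (False, 'rate-of-change alarm')
print(validate(150.0, 42.0, 1.0))  # (False, 'outside process limits')
```

A reading that jumps faster than the process physically can is more likely an instrument problem than a real process change, which is exactly what the rate-of-change alarm flags.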

The Advantages and Disadvantages of Auto-Tuning Controls

The three historic challenges in tuning (single-loop tuning, the limited success of auto-tuning, and the modern difficulties of model-based control) all share a similar root cause. The Control Engineering website published a two-part series on auto-tuning: Part 1, on its nature and definitions, in the June 2018 issue, and Part 2, on the advantages and disadvantages of auto-tuning control, in the August 2018 issue. Both articles are informative reads and draw accurate conclusions, but they miss one of the most important implications. The story of auto-tuning holds a valuable lesson.

The articles conclude that auto-tuning is still no panacea, and engineers rightly suspect as much. The most important challenge is unpredictable and nonlinear process behavior, in which the actual process response differs from the previously identified response on which the tuning or model is based. This turns out to be true of most processes, and it is why auto-tuning has achieved only limited success regardless of the industry's many attempts. A process whose actual response differs from its identified response poses a fundamental conundrum for tuning and modeling.

This also helps explain why single-loop tuning and multivariable control modeling tend to require recurring maintenance in practice, even though in theory they should be one-time engineering tasks. This has long been the reality of loop tuning, and it has now emerged as the reality of model-based control, too.

There are two popular, though imperfect, approaches to these problems. The first is average tuning, or an average model. Many engineers regard this as the best way to deal with the problem, but it has not solved it. The second is auto-tuning, or adaptive modeling. This can prove more problematic than averaging, because the tuning that fits today may not be appropriate tomorrow.

In plain terms, process gains are subject to change. Most, if not all, gains change frequently and dynamically because of everyday disturbances in process conditions. That is why retuning and remodeling remain as commonplace as they do, and it adds to the explanation for auto-tuning's limited success; engineers who have spent years troubleshooting control performance can testify that auto-tuning does not solve this recurring problem. Users should view the emergence of adaptive modeling, which attempts the same thing on a larger scale, with a critical eye.
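
The root cause can be shown with simple arithmetic: a controller gain chosen from an identified process gain produces the intended loop gain only while that identification still holds. The target loop gain and the drift values below are illustrative, not from the articles:

```python
# Why tuning based on an identified gain degrades when the real gain drifts.

def controller_gain(identified_process_gain, target_loop_gain=0.5):
    """Controller gain chosen so kc * K equals the target loop gain."""
    return target_loop_gain / identified_process_gain

kc = controller_gain(2.0)   # tuned once, when the process gain was 2.0

for actual_gain in (2.0, 1.0, 4.0):   # gain drifts with operating conditions
    loop_gain = kc * actual_gain
    verdict = ("as designed" if loop_gain == 0.5 else
               "sluggish" if loop_gain < 0.5 else "oscillatory")
    print(f"actual gain {actual_gain}: loop gain {loop_gain:.2f} -> {verdict}")
```

Whether the tuning came from a human, an auto-tuner, or an adaptive model, the result is only as good as the identified gain it was built on; when the gain halves or doubles, the same settings turn sluggish or oscillatory, and the loop goes back on the maintenance list.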

Guidelines for Choosing the Best Industrial Automation Controller

There are several important items engineers should consider when choosing a controller for machines and automation processes. The starting point is a breakdown of the equipment's operational needs, which frames the analysis of the range of controllers specified by OEMs and machine builders. Depending on how the equipment fits into the larger manufacturing environment, the system can be automated to provide a complete control solution or to control individual parts.

The controllers typically specified are the programmable logic controller (PLC) or the programmable automation controller (PAC). These control systems can control a single station, machine, process unit, assembly line, or even the entire plant in which they are installed. When an integrated manufacturing system is automated, a single controller can use multiple expansion and remote input/output (I/O) bases, communicating over Ethernet to provide end-to-end control. Alternatively, the application may call for compartmentalizing the automation into multiple logical sections; in that case, the automation is spread among several smaller or micro PLCs, depending on the demands and the functionality.

Many automation engineers see this as an irreversible decision between two vastly different platforms, but that does not have to be the case. Some controller families offer different platforms and size options that use the same programming software. A single programming environment offers greater application flexibility and can save companies time and money, because programs can easily be converted or moved from one PLC to another as projects require.

One of the hardest decisions is whether a single program should run on one large PLC, or whether the same project should run on several smaller PLCs, each executing only the parts needed by its specific subsystem.

This makes the decision more complex than simply picking a PLC, PAC, or PC-based controller size based on characteristics, capabilities, and functions. To decide which controller best suits the application, engineers should consider the following factors:

  • Automation needed for the new system
  • Existing environmental issues (depending on the application)
  • Communication and programming
  • Location of I/O
  • Analog and discrete devices
  • Loop controls
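
One way to apply factors like these is as a shortlisting checklist. The sketch below is purely hypothetical; the requirement fields, controller names, and capability data are all invented for illustration:

```python
# Hypothetical shortlisting of controllers against application requirements.

requirements = {
    "io_count": 120,            # I/O points the application needs
    "needs_loop_control": True, # PID loop control required
    "protocol": "EtherNet/IP",  # required communication protocol
}

controllers = [
    {"name": "micro PLC", "max_io": 64, "loop_control": False,
     "protocols": {"Modbus TCP"}},
    {"name": "compact PAC", "max_io": 256, "loop_control": True,
     "protocols": {"EtherNet/IP", "Modbus TCP"}},
]

def meets(req, ctrl):
    """True if a controller satisfies every hard requirement."""
    return (ctrl["max_io"] >= req["io_count"]
            and (not req["needs_loop_control"] or ctrl["loop_control"])
            and req["protocol"] in ctrl["protocols"])

shortlist = [c["name"] for c in controllers if meets(requirements, c)]
print(shortlist)   # ['compact PAC']
```

Treating each factor as a hard filter first, before weighing softer criteria such as programming environment and cost, keeps the comparison grounded in what the equipment actually needs.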

Whether the system is new or existing also dictates several critical selection factors. Where products are already installed, it is recommended practice to specify a new system that is compatible with the existing one. Only a small number of controller products may come from the same manufacturer, and they may not be compatible with other controllers.