The electrical engineer's library

Tuesday, March 10, 2026

Resistor Color Codes Diagram

This colorful diagram shows how to read resistor values from their color bands:

Enjoy!!!


Thursday, August 25, 2022

Features of PID controllers

Summary: PID control

Proportional control is the simplest to implement, but it is not always sufficient for stabilization.

Derivative control helps achieve stability and improves the time response, i.e., it gives more control over pole locations.

PD control allows arbitrary pole placement (strictly valid only for a second-order response); in general, though, we still have control over the two dominant poles.

Derivative control cannot be implemented exactly, so an approximate implementation is needed; D-control also amplifies noise.

Integral control is essential for perfect steady-state tracking of a constant reference and rejection of a constant disturbance.
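As a rough illustration of the three actions working together, here is a minimal discrete-time PID sketch in Python. The gains and the first-order plant x' = -x + u are assumptions chosen for the demo, not values from these notes:

```python
# Minimal discrete-time PID sketch (illustrative; gains and plant are assumed).

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}
    def pid(error):
        state["integral"] += error * dt                      # I: accumulate past error
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            derivative = (error - state["prev_error"]) / dt  # D: approximate slope
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return pid

# Drive a simple first-order plant x' = -x + u toward setpoint 1.0
dt = 0.01
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=dt)
x, setpoint = 0.0, 1.0
for _ in range(2000):
    u = pid(setpoint - x)
    x += dt * (-x + u)
print(round(x, 3))  # settles near the setpoint 1.0
```

Note how the integral term is what guarantees the final value reaches the setpoint exactly; with ki = 0 the loop would settle with a residual error.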


What is the role of P, I and D in a PID controller?

Understanding the roles is important while tuning.

Proportional (P) parameter

The P-action is proportional to the error (or to the PV). The error (or PV) is multiplied by the proportional gain and added to the controller output. The P-action gives the output a ‘kick’ in the right direction.

If the error value is zero, then the P action is zero. This implies that a controller with only P action needs a non-zero error to have a non-zero output. Accurate tracking is therefore not possible with only P control.
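A quick numerical sketch of this limitation, using an assumed first-order plant x' = -x + u with u = kp*(r - x): in steady state the error settles at r/(1 + kp), which shrinks as the gain grows but never reaches zero:

```python
# Steady-state error of a P-only loop on the plant x' = -x + u
# (plant and gains are illustrative assumptions, not from the text).
def p_only_error(kp, r, steps=5000, dt=0.01):
    x = 0.0
    for _ in range(steps):
        x += dt * (-x + kp * (r - x))   # Euler step of the closed loop
    return r - x                        # residual steady-state error

for kp in (1.0, 4.0, 9.0):
    print(kp, round(p_only_error(kp, 1.0), 3))  # errors ≈ 0.5, 0.2, 0.1
```

The predicted values match r/(1 + kp) for each gain, confirming that accurate tracking needs more than P action.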

Integral (I) parameter

Consider a plot of the error between PV and SP over time. In mathematics, the “integral” of the error can be interpreted as the area between the curve and the x-axis, from the y-axis up to the current time instant. Every time step, the plot extends a bit to the right. If the error is zero at that time, the area does not increase and the integral remains constant.

If the error is positive, the area under the error curve grows, resulting in a higher controller output. The I-action will decrease when the error becomes negative.

Typically the I-action acts much more slowly than the proportional action. However, it will eventually bring the error to zero, which the proportional action cannot do. So basically, the integral action looks at the past and checks whether the PV is reaching the setpoint; if not, it keeps acting on the output. It will keep steering the wheel until you are heading in the intended direction.

Derivative (D) parameter

The integral action has no means of predicting the behavior of the error. The derivative action addresses this problem by anticipating the future behavior of the error.

So the derivative action responds to the change of the error: it adds a contribution to the output according to how the error is changing. When the error is positive but starting to decline, the D-action reduces the output of the controller. It's the brake that tries to avoid overshoot: it reduces the oscillations induced by the other two actions, and it can speed the controller toward the setpoint we want to achieve.

However, the derivative action is not often used in PID tuning. The problem is that it can amplify noise: if the error signal is very noisy, the controller output tends to oscillate a lot, which can negatively affect the lifetime of equipment like pumps and valves.
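A small sketch of the noise-amplification problem (all numbers are assumptions): a slowly drifting error with a little measurement noise, differentiated by finite differences. The raw derivative is dominated by the noise, not by the underlying trend:

```python
import random
random.seed(0)

# Error = slow trend (slope 0.1/s) + small Gaussian measurement noise (std 0.01).
dt = 0.01
errors = [0.001 * k + 0.01 * random.gauss(0, 1) for k in range(1000)]

# Finite-difference derivative, as a D-term would compute it.
derivs = [(errors[k] - errors[k - 1]) / dt for k in range(1, 1000)]

true_slope = 0.1  # the trend the D-action is actually trying to track
spread = (sum((d - true_slope) ** 2 for d in derivs) / len(derivs)) ** 0.5
print(round(spread, 1))  # ≈ sqrt(2)*0.01/dt ≈ 1.4, far larger than the 0.1 trend
```

Dividing a noise difference by a small dt is what blows the noise up, which is why practical D-terms are usually low-pass filtered.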

Wednesday, August 10, 2022

Basic characteristics of lead, lag, and lag-lead compensation

Automatic Control Notes (Control Systems) for Electrical, Systems and Aerospace Engineering, made by me.



Lead compensation essentially produces an appreciable improvement in transient response and only a small change in steady-state accuracy. It may accentuate the effects of high-frequency noise. Lag compensation, on the other hand, produces a significant improvement in steady-state accuracy at the expense of a slower transient response, and it suppresses the effects of high-frequency noise signals. Lag-lead compensation combines the characteristics of lead compensation with those of lag compensation.

The use of a lead or lag compensator raises the order of the system by one (unless a zero of the compensator cancels a pole of the uncompensated open-loop transfer function). The use of a lag-lead compensator raises the order of the system by two [unless the zero(s) of the lag-lead compensator cancel the pole(s) of the uncompensated open-loop transfer function], which means the system becomes more complex and it is more difficult to control the transient-response behavior. The particular situation determines the type of compensation that should be used.
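To make the phase-lead property concrete, a small sketch (the zero and pole values are illustrative assumptions): for a lead compensator Gc(s) = (s+z)/(s+p) with p > z, the maximum phase lead occurs at w = sqrt(z*p) and equals asin[(p-z)/(p+z)]:

```python
import math

# Phase (in degrees) added by Gc(s) = (s+z)/(s+p) at frequency w.
def lead_phase_deg(w, z, p):
    return math.degrees(math.atan2(w, z) - math.atan2(w, p))

z, p = 1.0, 10.0                 # assumed zero and pole locations
w_max = math.sqrt(z * p)         # frequency of maximum phase lead
phi_max = math.degrees(math.asin((p - z) / (p + z)))

print(round(lead_phase_deg(w_max, z, p), 2))  # ≈ 54.9 degrees
print(round(phi_max, 2))                      # same value, from the closed form
```

In a design, w_max is typically placed at the desired gain crossover frequency so the full phase boost lands where the phase margin needs it.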


Some comments on lag compensation

1. Lag compensators are essentially low-pass filters. Thus, lag compensation permits high gain at low frequencies (which improves steady-state behavior) and reduces the gain in the higher critical-frequency range so as to improve the phase margin. Note that lag compensation uses the attenuation characteristic of the lag compensator at high frequencies, rather than its phase-lag characteristic. (The phase-lag characteristic is of no use for compensation purposes.)

2. Suppose the zero and pole of a lag compensator are located at s=-z and s=-p, respectively. Then the exact location of the zero and pole is not critical, provided that they are close to the origin, and that the ratio z/p is equal to the required multiplication factor of the static velocity error constant.
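A quick numerical check of this point (the z and p values are illustrative assumptions): the DC gain of Gc(s) = (s+z)/(s+p) with z > p is exactly z/p, the factor by which Kv is multiplied, while the high-frequency gain tends to 1:

```python
import math

# Magnitude of the lag compensator Gc(jw) = (jw+z)/(jw+p).
def lag_mag(w, z, p):
    return math.hypot(w, z) / math.hypot(w, p)

z, p = 0.1, 0.01  # assumed zero and pole, both close to the origin with z/p = 10

print(round(lag_mag(1e-6, z, p), 3))  # ≈ 10.0: low-frequency gain = z/p boosts Kv
print(round(lag_mag(1e3, z, p), 3))   # ≈ 1.0: no boost at high frequencies
```

Only the ratio z/p matters for the Kv boost, which is why the exact placement near the origin is not critical.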

However, it should be noted that the lag compensator zero and pole should not be unnecessarily close to the origin, because the lag compensator will create an additional closed-loop pole in the same region as the lag compensator zero and pole.

The closed-loop pole near the origin gives rise to a very slowly decaying transient response, although its magnitude is very small because the zero of the lag compensator almost cancels the effect of this pole. Nevertheless, the transient (decay) due to this pole is so slow that the settling time is adversely affected.

It is also noted that, in the system compensated by the lag compensator, the transfer function between the plant disturbance and the system error may not contain a zero near this pole. Therefore, the transient response to the disturbance input may last for a long time.

3. The attenuation introduced by the lag compensator shifts the gain crossover frequency down to a lower frequency where the phase margin is acceptable. Thus, the lag compensator reduces the bandwidth of the system and results in a slower transient response. [The phase curve of Gc(jw)G(jw) is relatively unchanged near and above the new gain crossover frequency.]

4. Since the lag compensator tends to integrate the input signal, it acts more or less like a proportional-integral controller. For this reason, a lag-compensated system tends to become less stable. To avoid this undesirable feature, the time constant T should be made sufficiently larger than the largest time constant of the system.

5. Conditional stability may occur when a system that has saturation or limiting is tuned using a lag compensator. When saturation or limiting occurs, the effective loop gain is reduced; the system then becomes less stable, and unstable operation may even result, as shown in Figure 7-108 (textbook, page 511). To avoid this, the system should be designed so that the effect of lag compensation becomes significant only when the amplitude of the input to the saturating element is small. (This can be achieved by means of minor-loop feedback compensation.)


Comparison of lag, lead, and lag-lead tradeoffs

1. Lead compensation achieves the desired result through its phase-lead contribution, whereas lag compensation accomplishes the result through its attenuation property at high frequencies. (In some design problems, both lag compensation and lead compensation may satisfy the specifications.)

2. Lead compensation is often used to improve stability margins. Lead compensation gives a higher gain crossover frequency than can be obtained with lag compensation. Higher gain crossover frequency means higher bandwidth. A large bandwidth implies a reduction in settling time. The bandwidth of a lead-compensated system is always greater than that of a lag-compensated system. Therefore, if a large bandwidth or fast response is desired, lead compensation should be used. However, if noise signals are present, a large bandwidth may not be suitable, because this makes the system more sensitive to noise signals, due to increased gain at high frequencies.

3. Lead compensation requires an additional increase in gain to offset the attenuation inherent in the lead network. This means that lead compensation requires a larger gain than lag compensation does. A larger gain almost always implies larger space, greater weight, and higher cost.

4. Lead compensation can generate large signals in the system. These signals are not desirable because they can cause saturation in the system.

5. Lag compensation reduces the system gain at high frequencies without reducing it at low frequencies. Since the system bandwidth is reduced, the system responds more slowly. Because the high-frequency gain is reduced, the total gain of the system can be increased, thereby increasing the low-frequency gain and improving the steady-state accuracy. Also, any high-frequency noise involved in the system is attenuated.

6. Lag compensation introduces a pole-zero combination near the origin that generates a long tail of small amplitude in the transient response.

7. If fast responses and sufficient static accuracy are desired, a lead-lag compensator can be used. This compensator increases the gain at low frequencies (which means an improvement in steady-state accuracy) and, at the same time, increases the bandwidth and stability margins of the system.

8. Although a large number of practical compensation tasks can be accomplished with lead, lag, or lag-lead compensators, for complicated systems, simple compensation using these compensators may not produce satisfactory results. In these cases, different compensators with different pole and zero configurations must be used.


Recommended texts:

1. Ogata, "Modern Control Engineering".

2. Ogata, "Discrete-Time Control Systems".

3. DiStefano, Stubberud & Williams, "Schaum's Outline of Theory and Problems of Feedback and Control Systems", Schaum's Outline Series.

4. Nise, "Control Systems Engineering".

Definitions for the BER (bit error rate) parameter table

Telecommunications Notes for Electrical, Electronic and Telecommunications Engineering, made by me.


Q(z) = 1/sqrt(2*Pi) int(z..infinity) exp(-u^2/2) du
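In code, Q(z) can be evaluated through the complementary error function, using the standard identity Q(z) = erfc(z/sqrt(2))/2:

```python
import math

# Gaussian tail probability Q(z) = (1/sqrt(2*pi)) * int(z..inf) exp(-u^2/2) du,
# computed via the complementary error function.
def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

print(Q(0.0))  # 0.5: half the Gaussian distribution lies above the mean
```

This is the function used in all the BER formulas below.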

If there is noise at the receiver input (assumed to be additive white Gaussian noise), the following is sampled at the output:

output = r_o = s_o + n_o

where s_o is a constant (s_o1 for a sent 1, s_o2 for a sent 0) and n_o is a zero-mean Gaussian random variable (the noise component). The constants s_o1 and s_o2 are associated with known input signaling waveforms s_1(t) and s_2(t), for a given type of receiver.

N_o/2

is the power spectral density (PSD) of the noise at the receiver input. Denoting by

E_d

the energy of the difference signal s_d(t) = s_1(t) - s_2(t) at the input of the receiver:

E_d = int(0..T) [s_1(t) - s_2(t)]^2 dt

then the average energy per bit E_b is defined as a certain function of E_d that depends on the type of signaling.


Example.

For bit reception via QPSK-type bandpass signaling, according to the table (see textbooks), the minimum TX bandwidth required is R/2 where R is the bit rate and the BER is:

Q[sqrt(2*(E_b/N_o))]

requiring coherent detection.
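As a small numerical sketch of this formula (the 9.6 dB operating point is an assumption for the example, not from the table):

```python
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

# Coherent QPSK BER = Q(sqrt(2*Eb/N0)), evaluated at an assumed Eb/N0 of 9.6 dB,
# a classic operating point that gives a BER of roughly 1e-5.
ebn0 = 10 ** (9.6 / 10)            # convert dB to linear
ber = Q(math.sqrt(2 * ebn0))
print(f"{ber:.1e}")                # ≈ 1e-5
```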


Example.

For OOK bandpass signaling bit reception, the minimum TX bandwidth required is the bit rate R and the BER is:

Q[sqrt(E_b/N_o)]

for coherent detection, and:

1/2*exp[-1/2*(E_b/N_o)]

for non-coherent detection, and it must be true that (E_b/N_o) > 1/4.


Example.

Unipolar signaling is received with white Gaussian noise at the input of a receiver filter, where E_b = A^2/(2R), the bit rate (data rate) is R = 9600 bps, and the noise PSD (power spectral density) is 3*10^-5. Calculate (E_b/N_o) at the filter input, the minimum bandwidth, and the corresponding BER.

Solution.

(E_b/N_o)_dB = 10 log_10 (E_b/N_o) = 10 log_10 ([A^2/(2R)]/[6*10^-5])

since PSD = N_o/2 = 3*10^-5, then N_o = 2*PSD = 6*10^-5. That is:

= 10 log_10 (A^2/[2R*6*10^-5]) = 10 log_10 (A^2/[12*10^-5*9600]) = 10 log_10 (A^2/1.152) = 10 [2 log_10 A - log_10 1.152] = 10[2 log_10 A - 0.06145] dB

The minimum transmission bandwidth required is:

R/2 = 9600/2 = 4800 Hz

The BER is:

Q[sqrt(E_b/N_o)] = Q[sqrt([A^2/(2R)]/[6*10^-5])] = Q[sqrt(A^2/[12*10^-5 *9600])] = Q[sqrt(A^2/1.152)] = Q[A/sqrt(1.152)] = Q[A/1.0733]

For example, if parameter A = 5, then:

(E_b/N_o)_dB = 10[2 log_10 A - 0.06145] = 10[2 log_10 5 - 0.06145] = 13.4 dB

and also:

BER = Q[A/1.0733] = Q[5/1.0733] = Q[4.6585] = 1/sqrt(2*Pi) int(4.6585..infinity) exp(-u^2/2) du = 0.000001593 = 1.6*10^-6
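The worked example can be checked numerically; this sketch simply re-evaluates the formulas above for A = 5:

```python
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

# Data from the worked unipolar example: A = 5, R = 9600 bps, noise PSD = 3e-5.
A, R, psd = 5.0, 9600.0, 3e-5
N0 = 2 * psd               # N_o = 2 * PSD
Eb = A ** 2 / (2 * R)      # E_b = A^2/(2R)

ebn0_db = 10 * math.log10(Eb / N0)
ber = Q(math.sqrt(Eb / N0))

print(round(ebn0_db, 1))   # ≈ 13.4 dB
print(f"{ber:.2e}")        # ≈ 1.6e-06
```

Both results agree with the hand calculation above.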


Sources:

1. Couch, "Digital and Analog Communications Systems."

2. Hsu, "Schaum's Outlines of Theory and Problems of Analog and Digital Communications", Schaum's Outlines series.

I highly recommend these two texts to learn Telecommunications; start with Schaum's first, then study the other!

Saturday, May 7, 2022

Random stories - Playing with my first PC (1999-2004)

July 1999. The 100 MHz PC with 32 MB of RAM running Windows 95 arrived, and I learned to do some wonders on it; I mean, little by little I became an advanced Windows user. At first the screen did not look very good, since the video card (as I later realized) was a damaged Cirrus Logic: as time passed, spots began to appear among the pixels on the screen. Of course, the screen resolution did not exceed 640x480 (although at that time it was enough for everything I did) and the color depth could not go beyond 16 colors, so any graphic with many colors looked bad unless it was dithered. As if this weren't enough, the monitor (Markvision brand, model VC4968, I think) was supposedly Super VGA with the proper drivers, but it could not be used at that kind of resolution. One day, apparently in the summer of 2000, I went to a PC room at the university where I was studying Electrical Engineering (a degree I never finished) to download the drivers from the internet: they did not exist.

But one day someone gave me an S3 video card (it was quite big) and I installed it myself, even though I knew very little about PC hardware. It worked, and it was great to finally be able to see 16-bit color; at last I could appreciate graphics in all their splendor. Also, because the resolution was low (low by today's standards; back then it was normal), the screen looked extremely sharp and easy on the eyes.

Regarding sound (my dream during my high school years was to compose and produce music using computers and synthesizers), I had an ESS AudioDrive 1868 sound card. The nice thing about this card was that it had high-quality MIDI sound, Yamaha OPL3-compatible. So one of the first things I downloaded from the internet (at the time, from college) was MIDI files to play. I brought everything home on floppy disks, put them in the floppy drive, and copied the MIDIs into folders. Thus I enjoyed some very good speakers that delivered the Yamaha OPL3-type MIDI sound in all its splendor. The basses sounded very special. Among other things, there was music from the Super Mario video game series (the MIDIs from Super Mario 64 were notable) and a collection of MIDIs in the dance-electronic genre, some of which sounded pretty good for OPL3-type rendering. But it didn't end there: I searched until I found a program for learning to compose music without really being a musician (I've always been very clumsy with instruments). That's how I came to compose a series of MIDIs (pretty bad and primitive, by the way) with Anvil Studio. It was laborious to set up the sound channels with the mouse, but the first time, everything sounded great. Another program I tried was Fractmus 2000: it generated MIDI music using fractal-based algorithms (music generated by mathematical formulas). I had interesting examples of that kind of music, and I remember showing it to a friend from uni while we were in a computer room with Windows NT 4.0 (a Pentium 133, apparently, with 64 MB of RAM): I plugged in the headphones I used with my “personal stereo” (a cassette Walkman, used in those years before the advent of portable CD players) and played him some files.

The year 2000 arrived, the year in which a critical period of my life began (no details, sorry) that lasted quite some time. In April of that year I learned to use a program called FastTracker II, version 2.09. Goodbye MIDIs, because this one used a format called XM: an extension of the MOD format that is basically similar to MIDI but embeds sound samples in WAV format, which makes the files much larger than MIDIs. That's how I got into the hobby of music production.

I used Windows 95 with Office 97 installed, so I did my first assignments for some elective courses (very low-demand courses, 5 credits at the time) in the already mythical Times New Roman font that everyone used (or Arial, failing that). I played with the Office “assistants”: I quite liked Einstein, especially when he left the screen sneezing as you closed the assistant (it was very funny).

For the most demanding courses, I bought a CD from a fellow student who sold software: Maple version 5 release 4 (a mathematics program: symbolic calculation and that kind of thing), powerful and very pleasant to use. At the house of a friend's cousin we installed MATLAB (a version that installed from 5 diskettes; with this friend I was taking a Linear Algebra course that included a MATLAB laboratory, and we gave 5 diskettes to the teacher so that he could copy the program for us), and we learned to use it for the Linear Algebra course.

I had an HP printer (I can't remember the model). It didn't print much, but it was enough for everything I needed. To print documents with several pages per sheet, we used the FinePrint program (the shareware version: every printed page came out with a message at the bottom saying it was shareware and could be purchased).
