**Error Correction using Syndrome Vector**

In order to use the syndrome vector for error correction, we follow the procedure given below:


(i) For a transmitted code vector X = (0 1 0 0 1 1 0), we obtain the received code vector Y = (0 1 ① 0 1 1 0), assuming the error to be in the 3rd position (the encircled bit is in error).

(ii) We calculate the corresponding syndrome vector as under :

S = YH^{T}

(iii) Since Y = X ⊕ E and XH^{T} = 0, the syndrome depends only on the error vector, i.e., S = EH^{T}.

(iv) From the syndrome vector, we obtain the error vector.

(v) From the error vector, we obtain the transmitted signal as under

X = Y ⊕ E
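The five steps above can be sketched in code. The parity check matrix H below is an assumed systematic (7, 4) Hamming matrix chosen only for illustration; it is not necessarily the [H] used in the example that follows.

```python
# Sketch of the five-step syndrome-decoding procedure (all arithmetic mod 2).
# ASSUMPTION: H is an illustrative systematic (7, 4) parity check matrix.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
n = len(H[0])

def syndrome(y):
    """Step (ii): S = Y H^T (mod 2)."""
    return tuple(sum(y[i] * row[i] for i in range(n)) % 2 for row in H)

# Steps (iii)-(iv): table mapping each syndrome to its single-bit error position.
table = {syndrome([1 if i == j else 0 for i in range(n)]): j for j in range(n)}

def correct(y):
    """Step (v): X = Y xor E."""
    s = syndrome(y)
    if all(b == 0 for b in s):
        return list(y)      # zero syndrome: no detectable error
    x = list(y)
    x[table[s]] ^= 1        # flip the bit indicated by the syndrome
    return x
```

For a valid code word the syndrome is all-zero; flipping any single bit produces a syndrome equal to the corresponding column of H, which the table maps back to the error position.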

**EXAMPLE 10.9. To make the above concept of error correction using syndrome vector clear, let us consider one particular example. For this, let us use the following parity check matrix:**

[H] =

**Solution :** (i) First, we obtain the received code vector Y.

Assume X = (0 1 0 0 1 1 0) to be the transmitted code vector.

Let the received code vector be obtained by assuming that the third bit is in error.

Thus, Y = (0 1 ① 0 1 1 0). Here, the encircled bit represents the error.

(ii) Next, let us determine the corresponding syndrome vector.

We know that,

Syndrome S = YH^{T}

S = [0 1 1 0 1 1 0] H^{T}

We have S = (0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0)

or S = [1, 0, 1]

This is the syndrome vector for the given received signal. It corresponds to the 3rd row of the transpose matrix H^{T}.

(iii) But S = YH^{T} = EH^{T}, since Y = X ⊕ E and XH^{T} = 0.

Therefore, EH^{T} = [1, 0, 1]

(iv) Let us obtain the error vector for the syndrome vector S = [1, 0, 1].

From table 10.6, the error vector corresponding to the syndrome (1, 0, 1) is given by

E = (0 0 1 0 0 0 0)

This shows that the error is present in the third position of the received code vector Y.

(v) Now, let us correct the error.

The correct vector *X* can be obtained as under :

X = Y ⊕ E

Substituting the values of Y and E, we obtain

X = [0 1 1 0 1 1 0] ⊕ [0 0 1 0 0 0 0]

or X = [0 1 0 0 1 1 0]

This is the same as the transmitted code vector.

**NOTE** Thus, by modulo-2 addition of the received signal with the error vector E, we can obtain the correct code word. Hence, a single error can be corrected.
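The correction step of the example reduces to a bitwise XOR, which can be checked directly:

```python
# Modulo-2 addition (XOR) of the received vector Y and the error vector E
# from the example above recovers the transmitted vector X.
Y = [0, 1, 1, 0, 1, 1, 0]
E = [0, 0, 1, 0, 0, 0, 0]
X = [y ^ e for y, e in zip(Y, E)]
print(X)  # [0, 1, 0, 0, 1, 1, 0]
```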

**10.13.2. Syndrome Decoder for (n, k) Block Codes**

**Block Diagram**

The block diagram of a syndrome decoder for correcting errors in (n, k) block codes is shown in figure 10.23.

**diagram**

**FIGURE 10.23** *Syndrome decoder for linear block code.*

**Working Operation**

**DO YOU KNOW?**

For a fixed modulation scheme, addition of redundancy in the coded messages implies the need for increased transmission bandwidth.

The received n-bit code word Y is stored in an n-bit register. This code vector is then applied to a syndrome calculator, which computes the syndrome S = YH^{T}; for this purpose, the transposed parity check matrix H^{T} is stored in the syndrome calculator. The (n – k)-bit syndrome vector *S* is applied to a look-up table containing the error patterns. The error pattern corresponding to the particular syndrome *S* generated at the output of the syndrome calculator is selected. The selected error pattern *E* is then added (modulo-2 addition) to the received vector Y to generate the corrected code vector X.

Therefore, X = Y ⊕ E

**10.13.3. Decoding of a Linear Block Code**

*(U.P. Tech, Sem., Exam. 2004-05, 2005-06) (10 Marks) *

**Basic Objective**

The basic objective of channel coding is to detect and correct errors when messages are transmitted over a noisy channel. The noise can randomly alter some of the symbols in the transmitted code word (i.e., from 1 to 0 or from 0 to 1). If only one symbol is changed, as shown in figure 10.24, then the Hamming distance between the original code word and the erroneous code word is 1.

**diagram**

**FIGURE 10.24** Relation between number of errors and Hamming distance.

If the noise transforms *t* symbols (i.e., if *t* symbols in the received code word are in error), then the Hamming distance of the received word will be *t* with respect to the correct code word.

**Number of Detectable Errors**

An error will be detected as long as it does not transform one code word into another valid code word. If the minimum distance between the code words is d_{min}, then the weight of the error pattern must be d_{min} or more to cause transformation of one code word into another. Error detection is always possible when the number of transmission errors in a code word is less than the minimum distance d_{min}, because then the erroneous word is not a valid code word. But when the number of errors equals or exceeds d_{min}, the erroneous code word may correspond to another valid code word and the errors cannot be detected. Thus, the error detection and correction capabilities of a coding technique depend on the minimum distance d_{min}.

A code having a minimum distance d_{min} will detect all the non-zero error patterns of weight less than or equal to (d_{min} – 1). Hence, we can say that up to s errors per word can be detected such that

d_{min} ≥ (s + 1)

or s ≤ (d_{min} – 1) … (10.26)

Moreover, there is at least one error pattern of weight d_{min} which cannot be detected; it corresponds to the two code words which are closest to each other. Some error patterns of weight d_{min} may be detected, but not all error patterns of weight d_{min} will be detected.

**EXAMPLE 10.9. For the code X_{1} = (000, 111), how many errors can be successfully detected?**

**Solution :** First, let us find the minimum distance.

It is given that the code consists of two code words 000 and 111. Hence, minimum distance is 3 as shown in figure 10.25.

**diagram**

**FIGURE 10.25**

First, let us calculate the number of detectable errors.

The number of detectable errors is given by

s ≤ (d_{min} – 1)

or s ≤ 3 – 1

or s ≤ 2

Thus, up to two errors per word can be detected by this code. This means that any error pattern belonging to the following set can be detected by this code:

*s* = {011, 101, 110, 001, 010, 100}
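The claim can be checked by brute force: an error pattern is detectable exactly when it never turns one code word into another valid code word. A minimal sketch for the two-word code above:

```python
from itertools import product

# Code from the example: two code words, d_min = 3.
codewords = {(0, 0, 0), (1, 1, 1)}

def add(a, b):
    """Modulo-2 (bitwise XOR) addition of two vectors."""
    return tuple(x ^ y for x, y in zip(a, b))

detectable = set()
for e in product((0, 1), repeat=3):
    if e == (0, 0, 0):
        continue
    # e is detectable if it never maps a code word onto another code word
    if all(add(c, e) not in codewords for c in codewords):
        detectable.add(e)

print(sorted(detectable))
```

The loop finds exactly the six patterns of weight one and two listed above; the weight-3 pattern 111 maps 000 onto the valid code word 111 and so escapes detection.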

**Number of Correctable Errors**

Let us now see how many errors can be successfully corrected by a code. This will be based on the best possible guess about the original transmitted code word. It is logical to guess that the valid code word which is nearest (in terms of Hamming distance) to the received code word must have been actually transmitted. This technique is called **Nearest Neighbour Decoding**.

But the possible problem with this technique is that more than one code word may be at the same Hamming distance from the received word. In this situation, the receiver does one of the following:

(i) It can pick one of the equally distant code words on a random basis, or

(ii) It can request the transmitter to retransmit.

To ensure that the received word, having at most *t* errors, is closest to the original code word, we put the following condition on the minimum distance:

d_{min} ≥ 2*t* + 1 … (10.27)

Hence, the number of errors that can be successfully corrected by a code is given by

*t* ≤ (d_{min} – 1)/2 … (10.28)

**Summary of error detecting and correcting capability**

The error detecting and correcting capability of a code is summarized in Table 10.8.

**Table 10.8. Role of d _{min} for detection and correction of errors**

| S.N. | Description | Expression |
| --- | --- | --- |
| 1. | Detect upto s errors per word | d_{min} ≥ (s + 1) |
| 2. | Correct upto t errors per word | d_{min} ≥ (2t + 1) |
| 3. | Correct upto t errors and detect s > t errors per word | d_{min} ≥ (t + s + 1) |
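The first two rows of the table follow from d_{min} alone. A small sketch computing d_{min}, s, and t for an arbitrary set of code words (the function names here are ours, not from the text):

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def capabilities(codewords):
    """Return (d_min, detectable errors s, correctable errors t)."""
    d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
    s = d_min - 1           # Table 10.8, row 1: d_min >= s + 1
    t = (d_min - 1) // 2    # Table 10.8, row 2: d_min >= 2t + 1
    return d_min, s, t

print(capabilities([(0, 0, 0), (1, 1, 1)]))  # (3, 2, 1)
```

For the code {000, 111} of Example 10.9 this gives d_{min} = 3, s = 2, t = 1, matching the results derived above.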

**Graphical representation for the condition d_{min} ≥ 2t + 1**

Figure 10.26 shows the graphical representation of the condition

d_{min} ≥ *2t + 1*

Each *n*-bit code word (i.e., code vector) can be represented as a point in space. All the words at a Hamming distance of *t* or less from a code word lie within a sphere centered at that code word with a radius of *t*. If the minimum distance of the code is d_{min} and the condition d_{min} ≥ 2t + 1 is satisfied, then none of these spheres intersect. Any received vector, which is basically a point within a specific sphere, is closer to its centre (which represents a code word) than to any other code word. The sphere associated with each code word is called its **Decoding sphere**. Hence, it is possible to decode the received vector using the nearest neighbour method.

Figure 10.26 tells us that the received vectors within the sphere of radius *t* centered at x_{1} will be decoded as x_{1}. For decoding without any ambiguity, we have

d_{min} ≥ *2t* + 1

**diagram**

**FIGURE 10.26** *Decoding spheres.*

**EXAMPLE 10.10. An error control code has the following parity check matrix:**

*H* =
[1 0 1 1 0 0]
[1 1 0 0 1 0]
[0 1 1 0 0 1]

**(i) Determine the generator matrix G.**

**(ii) Find the code word that begins with 101 …**

**(iii) Decode the received code word 1 1 0 1 1 0. Comment on the error detection capability of this code.**

**Solution :** From the parity check matrix [H]_{3×6}, it is obvious that this is a (6, 3) linear block code.

Therefore, n = 6, k = 3 and (n – k) = 3.

(i) We know that the parity check matrix is given by,

[H] = [P^{T} : I_{n–k}]_{(n–k) × n}

or [H]_{3×6} = [P^{T} : I_{3}]_{3×6}

Using the given expression for H, we obtain

P^{T} =
[1 0 1]
[1 1 0]
[0 1 1]

Therefore, the transpose of the matrix P^{T} is given by,

P =
[1 1 0]
[0 1 1]
[1 0 1]

We know that the generator matrix is given by,

G = [I_{k} : P]_{k×n} =
[1 0 0 1 1 0]
[0 1 0 0 1 1]
[0 0 1 1 0 1]

This is the required generator matrix.

(ii) The message vector is given by,

M = [1 0 1]

The three parity bits are obtained by using the following standard expression:

C = MP

[c_{0} c_{1} c_{2}] = [m_{0} m_{1} m_{2}] [P]

or c_{0} = (m_{0} × 1) ⊕ (m_{1} × 0) ⊕ (m_{2} × 1) = m_{0} ⊕ m_{2}

Substituting m_{0} = 1 and m_{2} = 1, we obtain

c_{0} = 1 ⊕ 1 = 0

Similarly, c_{1} = (m_{0} × 1) ⊕ (m_{1} × 1) ⊕ (m_{2} × 0) = m_{0} ⊕ m_{1}

Substituting m_{0} = 1 and m_{1} = 0, we obtain

c_{1} = 1 ⊕ 0 = 1

and c_{2} = (m_{0} × 0) ⊕ (m_{1} × 1) ⊕ (m_{2} × 1) = m_{1} ⊕ m_{2}

or c_{2} = 0 ⊕ 1 = 1

Therefore, the parity word is C = [0 1 1].

Hence, the complete code word is given by,

X = [M : C] = [1 0 1 0 1 1]

**Ans.**
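The parity computation in step (ii) can be checked in code, using the P matrix read off from the parity equations c_{0} = m_{0} ⊕ m_{2}, c_{1} = m_{0} ⊕ m_{1}, c_{2} = m_{1} ⊕ m_{2}:

```python
# P matrix implied by the parity equations of this example.
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]

def encode(m):
    """Systematic encoding: X = [M : C] with C = M P (mod 2)."""
    c = [sum(m[i] * P[i][j] for i in range(3)) % 2 for j in range(3)]
    return m + c

print(encode([1, 0, 1]))  # [1, 0, 1, 0, 1, 1]
```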

(iii) The received code word is Y = [1 1 0 1 1 0].

Therefore, the syndrome is given by

S = YH^{T}

Substituting for Y and H^{T}, we obtain

S = [1 1 0 1 1 0] H^{T}

or S = [1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0, 1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]

or S = [0, 1, 1]

This is the same as the second row of the transpose matrix H^{T}, which indicates that there is an error in the second bit of the received signal, i.e.,

E = [0 1 0 0 0 0]

Therefore, the correct code word is X = Y ⊕ E = [1 0 0 1 1 0]. **Ans.**

The correct code word is obtained by replacing the second bit by a 0.
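Step (iii) can be verified numerically. The H below is the [P^{T} : I_{3}] form consistent with the parity equations c_{0} = m_{0} ⊕ m_{2}, c_{1} = m_{0} ⊕ m_{1}, c_{2} = m_{1} ⊕ m_{2} of this example:

```python
# Parity check matrix in [P^T : I3] form for this (6, 3) code.
H = [
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
Y = [1, 1, 0, 1, 1, 0]

# Syndrome S = Y H^T (mod 2)
S = tuple(sum(Y[i] * row[i] for i in range(6)) % 2 for row in H)
print(S)   # (0, 1, 1): matches column 2 of H, so bit 2 is in error

X = list(Y)
X[1] ^= 1  # flip the second bit
print(X)   # [1, 0, 0, 1, 1, 0]
```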

(iv) It is possible to verify that this code has a minimum distance d_{min} = 3. The relation between d_{min} and the number of errors that can be detected is:

d_{min} ≥ s + 1

For d_{min} = 3, we have

3 ≥ s + 1

or s ≤ 2

This means that up to two errors can be detected. Also, d_{min} ≥ 2t + 1

or 3 ≥ 2t + 1 or t ≤ 1

This means that up to one error can be corrected.