EmbLogic's Blog

How to Replace a Switch-Statement with function pointer

#include <iostream>
using namespace std;

float Plus (float a, float b) { return a+b; }
float Minus (float a, float b) { return a-b; }
float Multiply(float a, float b) { return a*b; }
float Divide (float a, float b) { return a/b; }

// Solution with a switch-statement - opCode specifies which operation to execute
void Switch(float a, float b, char opCode)
{
float result;

// execute operation
switch(opCode)
{
case '+' : result = Plus (a, b); break;
case '-' : result = Minus (a, b); break;
case '*' : result = Multiply (a, b); break;
case '/' : result = Divide (a, b); break;
}

cout << "Switch: 2+5=" << result << endl; // display result
}

// Solution with a function pointer - pt2Func is a function pointer that points
// to a function which takes two floats and returns a float. The function pointer
// "specifies" which operation shall be executed.
void Switch_With_Function_Pointer(float a, float b, float (*pt2Func)(float, float))
{
float result = pt2Func(a, b); // call using function pointer

cout << "Switch replaced by function pointer: 2-5="; // display result
cout << result << endl;
}

// Execute example code
void Replace_A_Switch()
{
cout << endl << "Executing function 'Replace_A_Switch'" << endl;

Switch(2, 5, /* '+' specifies function 'Plus' to be executed */ '+');
Switch_With_Function_Pointer(2, 5, /* pointer to function 'Minus' */ &Minus);
}


What is Function Pointer

A function pointer (or subroutine pointer or procedure pointer) is a type of pointer supported by third-generation programming languages (such as PL/I, COBOL, Fortran,[1] dBASE dBL, and C) and object-oriented programming languages (such as C++ and D).[2]

Instead of referring to data values, a function pointer points to executable code within memory. When dereferenced, a function pointer can be used to invoke the function it points to and pass its arguments just like a normal function call. Such an invocation is also known as an “indirect” call, because the function is being invoked indirectly through a variable instead of directly through a fixed name or address.

Function pointers can be used to simplify code by providing a simple way to select a function to execute based on run-time values.
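
As a minimal C sketch of this idea (the names ops and choice are illustrative, not part of the original text), an operation can be selected at run time by indexing a table of function pointers:

#include <stdio.h>

static double add(double a, double b) { return a + b; }
static double sub(double a, double b) { return a - b; }

/* table of function pointers, indexed at run time */
static double (*ops[])(double, double) = { add, sub };

int main(void) {
    int choice = 1;                          /* could come from user input */
    double result = ops[choice](3.0, 4.0);   /* indirect call: sub(3.0, 4.0) */
    printf("%f\n", result);                  /* prints -1.000000 */
    return 0;
}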

The simplest implementation of a function (or subroutine) pointer is as a variable containing the address of the function within executable memory. Older third-generation languages such as PL/I and COBOL, as well as more modern languages such as Pascal and C generally implement function pointers in this manner.[3]

The following C program illustrates the use of two function pointers:

func1 takes one double-precision (double) parameter and returns another double, and is assigned to a function which converts centimetres to inches
func2 takes a pointer to a constant character array as well as an integer and returns a pointer to a character, and is assigned to a string library function which returns a pointer to the first occurrence of a given character in a character array

#include <stdio.h>  /* for printf */
#include <string.h> /* for strchr */

double cm_to_inches(double cm) {
return cm / 2.54;
}

int main(void) {
double (*func1)(double) = cm_to_inches;
char * (*func2)(const char *, int) = strchr;
printf("%f %s", func1(15.0), func2("Wikipedia", 'p'));
/* prints "5.905512 pedia" */
return 0;
}

The next program uses a function pointer to invoke one of two functions (sin or cos) indirectly from another function (compute_sum, computing an approximation of the function’s Riemann integration). The program operates by having function main call function compute_sum twice, passing it a pointer to the library function sin the first time, and a pointer to function cos the second time. Function compute_sum in turn invokes one of the two functions indirectly by dereferencing its function pointer argument funcp multiple times, adding together the values that the invoked function returns and returning the resulting sum. The two sums are written to the standard output by main.

#include <stdio.h>
#include <math.h>

// Function taking a function pointer as an argument
double compute_sum(double (*funcp)(double), double lo, double hi) {
double sum = 0.0;

// Add values returned by the pointed-to function '*funcp'
int i;
for(i = 0; i <= 100; i++) {
double x, y;

// Use the function pointer 'funcp' to invoke the function
x = i / 100.0 * (hi - lo) + lo;
y = (*funcp)(x);
sum += y;
}
return sum / 101.0;
}

int main(void) {
double (*fp)(double); // Function pointer
double sum;

// Use 'sin()' as the pointed-to function
fp = sin;
sum = compute_sum(fp, 0.0, 1.0);
printf("sum(sin): %f\n", sum);

// Use 'cos()' as the pointed-to function
fp = cos;
sum = compute_sum(fp, 0.0, 1.0);
printf("sum(cos): %f\n", sum);
return 0;
}


What are the different types of C pointers?

NULL Pointer
Dangling Pointer
Generic Pointers
Wild Pointer
Complex Pointers
Near Pointer
Far Pointer
Huge Pointers

NULL Pointer :

The literal meaning of a NULL pointer is a pointer which points to nothing: it is guaranteed not to refer to any valid object or function.

Examples of NULL pointer:

int *ptr=(int *)0;
float *ptr=(float *)0;
char *ptr=(char *)0;
double *ptr=(double *)0;
char *ptr='\0';
int *ptr=NULL;

NULL is a macro constant which has been defined in header files such as stddef.h as:
#define NULL 0
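
A NULL pointer is commonly used as a "points to nothing" sentinel, for example to detect a failed allocation. A minimal sketch (assuming a hosted C environment):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(10 * sizeof *p);  /* malloc returns NULL on failure */
    if (p == NULL) {
        printf("allocation failed\n");
        return 1;
    }
    p[0] = 42;
    printf("%d\n", p[0]);
    free(p);
    return 0;
}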

Dangling pointer :

If a pointer is pointing to the memory address of a variable, but that variable has since been deleted (freed or gone out of scope) while the pointer still points to the same memory location, it is known as a dangling pointer, and this problem is known as the dangling pointer problem.
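
A minimal sketch of how a dangling pointer arises (variable names are illustrative):

#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    *p = 10;
    free(p);      /* the memory is released ...                         */
                  /* ... but p still holds the old address: dangling.   */
                  /* Dereferencing p here would be undefined behaviour. */
    p = NULL;     /* common fix: reset the pointer after free()         */
    return 0;
}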

Generic pointer :

The void pointer in C is known as a generic pointer. The literal meaning of generic pointer is a pointer which can point to any type of data.

Example:
void *ptr;

Here ptr is a generic pointer.

We cannot dereference a generic pointer without first casting it to a concrete pointer type.
We can find the size of a generic pointer using the sizeof operator.
A generic pointer can hold any type of pointer, such as a char pointer, struct pointer, array of pointers, etc., without any typecasting.
Any type of pointer can hold a generic pointer without any typecasting.
Generic pointers are used when we want to return a pointer which is applicable to all pointer types. For example, the return type of the malloc function is a generic pointer because it can dynamically allocate memory to store an integer, float, structure, etc.; its return value is then assigned (or cast) to the appropriate pointer type.
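
A small sketch illustrating these points (dereferencing only after a cast, sizeof on the pointer itself, and the implicit conversions to and from void *); the %zu format assumes a C99 compiler:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int x = 5;
    void *ptr = &x;                  /* any object pointer converts to void *  */
    /* printf("%d", *ptr);              error: a void * cannot be dereferenced */
    printf("%d\n", *(int *)ptr);     /* cast first, then dereference           */
    printf("%zu\n", sizeof(ptr));    /* size of the generic pointer itself     */

    int *arr = malloc(3 * sizeof *arr);  /* void * from malloc converts back to
                                            int * without a cast in C          */
    free(arr);
    return 0;
}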

Wild pointer :

A pointer in C which has not been initialized is known as a wild pointer.
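
A two-line sketch of the difference between a wild pointer and an initialized pointer:

int main(void) {
    int x = 10;
    int *wild;       /* uninitialized: holds an indeterminate address          */
    int *ok = &x;    /* initialized: safe to dereference                       */
    /* *wild = 5;       undefined behaviour - never dereference a wild pointer */
    *ok = 5;
    (void)wild;      /* silence the unused-variable warning in this sketch     */
    return 0;
}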

Complex pointer :

Pointer to function
Pointer to array
Pointer to array of integer
Pointer to array of function
Pointer to array of character
Pointer to array of structure
Pointer to array of union
Pointer to array of array
Pointer to two dimensional array
Pointer to three dimensional array
Pointer to array of string
Pointer to array of pointer to string
Pointer to structure
Pointer to union
Multilevel pointers
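
A few of the declarations listed above, written out as a short C sketch (all names are illustrative):

#include <stdio.h>

struct point { int x, y; };

int square(int n) { return n * n; }

int main(void) {
    int (*pf)(int) = square;   /* pointer to function                     */
    int a[3] = {1, 2, 3};
    int (*pa)[3] = &a;         /* pointer to an array of 3 ints           */
    struct point pt = {4, 5};
    struct point *ps = &pt;    /* pointer to structure                    */
    int *p1 = &a[0];
    int **p2 = &p1;            /* multilevel pointer (pointer to pointer) */

    printf("%d %d %d %d\n", pf(3), (*pa)[2], ps->y, **p2);  /* 9 3 5 1 */
    return 0;
}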

In Turbo C there are three types of pointers. Turbo C works under the DOS operating system, which runs on the 16-bit 8086 family of microprocessors.

1. Near pointer
2. Far pointer
3. Huge pointer

A pointer which can point only within the current 64 KB data segment is known as a near pointer.

A pointer which can point to or access the whole resident memory of RAM, i.e. all 16 segments, is known as a far pointer.

A huge pointer can also access all 16 segments of the resident memory. Unlike a far pointer, a huge pointer is normalized, so incrementing it can carry it across segment boundaries, and comparisons between huge pointers are made on the physical address.
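
As a rough sketch (Turbo C / 16-bit DOS only; near, far and huge are non-standard keywords and will not compile on modern compilers), the three pointer types are declared like this:

#include <stdio.h>

int main(void) {
    char near *np;                            /* 2 bytes: offset within the data segment */
    char far  *fp = (char far *)0xB8000000;   /* 4 bytes: segment:offset, not normalized  */
    char huge *hp = (char huge *)0xB8000000;  /* 4 bytes: segment:offset, normalized      */

    printf("%d %d %d\n", sizeof(np), sizeof(fp), sizeof(hp));  /* 2 4 4 */
    return 0;
}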


Pointers Questions

1) What will be the output of the following program?
#include <stdio.h>
int main()
{
char *str="IncludeHelp";
printf("%c\n",*&*str);
return 0;
}

Correct Answer
I
& is the reference (address-of) operator and * is the dereference operator; we can use these operators any number of times. str points to the first character of "IncludeHelp", *str gives 'I', and *& references and then dereferences str again, so the value is unchanged.

2)
#include <stdio.h>
int main()
{
int iVal;
char cVal;
void *ptr; // void pointer
iVal=50; cVal=65;

ptr=&iVal;
printf("value =%d,size= %d\n",*(int*)ptr,sizeof(ptr));

ptr=&cVal;
printf("value =%d,size= %d\n",*(char*)ptr,sizeof(ptr));
return 0;
}

Correct Answer
value =50,size= 4
value =65,size= 4
A void pointer can be type-cast to any data type, and a pointer occupies 4 bytes (on a 32-bit compiler).
To print a value using a void pointer, you have to write it like this: *(data_type*)void_ptr;

3)
#include <stdio.h>
int main()
{
char *str[]={"AAAAA","BBBBB","CCCCC","DDDDD"};
char **sptr[]={str+3,str+2,str+1,str};
char ***pp;

pp=sptr;
++pp;
printf("%s",**++pp+2);
return 0;
}
Correct Answer
BBB
str is an array of pointers to strings, sptr is an array of double pointers pointing to the strings of str in reverse order, and pp is a pointer holding the base address of sptr.
++pp makes pp point to index 1 of sptr, which contains str+2 ("CCCCC").
In printf("%s",**++pp+2); the ++pp makes pp point to index 2 of sptr, which contains str+1, so **++pp is the value stored at str+1 ("BBBBB").
(**++pp)+2 then points to index 2 of "BBBBB", hence BBB is printed.

4)
#include <stdio.h>
char* strFun(void)
{
char *str="IncludeHelp";
return str;
}
int main()
{
char *x;
x=strFun();
printf("str value = %s",x);
return 0;
}

Correct Answer
str value = IncludeHelp

5)
If the address in pointer ptr is 2000, then what will be the output of the following program?
[On 32 bit compiler.]
#include <stdio.h>
int main()
{
void *ptr;
++ptr;
printf("%u",ptr);
return 0;
}
a)2004
b)2001
c)2000
d)ERROR
Answer
Correct Answer - (d) ERROR
ERROR: Size of the type is unknown or zero.
ptr is a void pointer, and the scale factor of void pointer is unknown or zero.

6) What will be the output of the following program?

#include <stdio.h>
int main()
{
char ch=10;
void *ptr=&ch;
printf("%d,%d",*(char*)ptr,++(*(char*)ptr));
return 0;
}

Answer
Correct Answer -
11,11
*(char*)ptr returns the value of ch. Since (with this compiler) printf evaluates its arguments from right to left,
++(*(char*)ptr) increments the value to 11 before either %d is printed, so the output is 11,11.

7) What will be the output of the following program?
#include <stdio.h>
int main()
{
int a=10,b=2;
int *pa=&a,*pb=&b;
printf("value = %d", *pa/*pb);
return 0;
}

Correct Answer
ERROR: unexpected end of file found in comment.
The compiler treats the operators / and * as /*, which happens to be the start of a comment.
To fix the error, use either *pa/ *pb (space between operators) or *pa/(*pb).

8)
What is the meaning of the following declaration?
int(*ptr[5])();

(A) ptr is pointer to function.
(B) ptr is array of pointer to function.
(C) ptr is pointer to such function which return type is array.
(D) ptr is pointer to array of function.
(E) None of these

Answer: (B)
Explanation:

Here ptr is an array of 5 pointers to functions returning int; it is an array, not a pointer.

9)
What is the meaning of the following pointer declaration?
int(*(*ptr1)())[2];

(A) ptr is pointer to function.
(B) ptr is array of pointer to function.
(C) ptr is pointer to such function which return type is pointer to an array.
(D) ptr is pointer array of function.
(E) None of these

Answer: (C)
Explanation:

ptr1 is a pointer to a function (taking unspecified arguments) whose return type is a pointer to an array of two ints.

10)
What is the size of a generic pointer in C?

(A) 0
(B) 1
(C) 2
(D) Null
(E) Undefined

Answer: (C)
Explanation:

The size of any type of pointer is 2 bytes (in the case of a near pointer).
Note: by default all pointers are near pointers if the default memory model is small.

11)
What will be the output of the following C code?

#include <stdio.h>
#include <conio.h>
int main(){
int *p1,**p2;
double *q1,**q2;
clrscr();
printf("%d %d ",sizeof(p1),sizeof(p2));
printf("%d %d",sizeof(q1),sizeof(q2));
getch();
return 0;
}
(A) 1 2 4 8
(B) 2 4 4 8
(C) 2 4 2 4
(D) 2 2 2 2
(E) 2 2 4 4

Answer: (D)
Explanation:

The size of any type of pointer is 2 bytes (in the case of a near pointer).

12)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
char huge *p=(char *)0XC0563331;
char huge *q=(char *)0XC2551341;
if(p==q)
printf("Equal");
else if(p>q)
printf("Greater than");
else
printf("Less than");
return 0;
}
(A) Equal
(B) Greater than
(C) Less than
(D) Compiler error
(E) None of above

Answer: (A)
Explanation:

As we know, huge pointers are compared using their physical addresses.
Physical address of huge pointer p
Huge address: 0XC0563331
Offset address: 0x3331
Segment address: 0XC056
Physical address= Segment address * 0X10 + Offset address
=0XC056 * 0X10 +0X3331
=0XC0560 + 0X3331
=0XC3891
Physical address of huge pointer q
Huge address: 0XC2551341
Offset address: 0x1341
Segment address: 0XC255
Physical address= Segment address * 0X10 + Offset address
=0XC255 * 0X10 +0X1341
=0XC2550 + 0X1341
=0XC3891
Since both huge pointers p and q point to the same physical address, the if condition is true.

14)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
int a=5,b=10,c=15;
int *arr[]={&a,&b,&c};
printf("%d",*arr[1]);
return 0;
}

(A) 5
(B) 10
(C) 15
(D) Compiler error
(E) None of above

Answer: (D)
Explanation:

In Turbo C (C89), the initializers of an aggregate must be constant expressions, so an array element cannot be initialized with the address of an auto variable; it can be the address of a static or extern variable.

15)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
int a[2][4]={3,6,9,12,15,18,21,24};
printf("%d %d %d",*(a[1]+2),*(*(a+1)+2),2[1[a]]);
return 0;
}

(A) 15 18 21
(B) 21 21 21
(C) 24 24 24
(D) Compiler error
(E) None of above

Answer: (B)
Explanation:

In C,
a[1][2] = *(a[1]+2) = *(*(a+1)+2) = 2[a[1]] = 2[1[a]]
Now, a[1][2] means element number 1*4+2 = 6 of the array, counting from zero, i.e. 21.

16)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
const int x=25;
int * const p=&x;
*p=2*x;
printf("%d",x);
return 0;
}

(A) 25
(B) 50
(C) 0
(D) Compiler error
(E) None of above

Answer: (B)
Explanation:

The const keyword in C does not make a variable truly constant; it only makes it read-only through that name. With the help of a pointer we can still modify the const variable. In this example pointer p points to the address of variable x. In the line:
int * const p=&x;
p is a constant pointer, while the content pointed to by p, i.e. *p, is not constant.
*p=2*x puts the value 50 at the memory location of variable x.

17)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
static char *s[3]={"math","phy","che"};
typedef char *( *ppp)[3];
static ppp p1=&s,p2=&s,p3=&s;
char * (*(*array[3]))[3]={&p1,&p2,&p3};
char * (*(*(*ptr)[3]))[3]=&array;
p2+=1;
p3+=2;
printf("%s",(***ptr[0])[2]);
return 0;
}
(A) math
(B) phy
(C) che
(D) Compiler error
(E) None of these

Answer: (C)
Explanation:

Here p1, p2 and p3 are pointers to the array of strings s, array[] contains the addresses of p1, p2 and p3, and ptr points to array.

As we know p[i]=*(p+i)
(***ptr[0])[2]=(*(***ptr+0))[2]=(***ptr)[2]
=(***(&array))[2] //ptr=&array
=(**array)[2] //From rule *&p=p
=(**(&p1))[2] //array=&p1
=(*p1)[2]
=(*&s)[2] //p1=&s
=s[2]="che"

18)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
#include <conio.h>
int display();
int(*array[3])();
int(*(*ptr)[3])();
int main(){
array[0]=display;
array[1]=getch;
ptr=&array;
printf("%d",(**ptr)());
(*(*ptr+1))();
return 0;
}

int display(){
int x=5;
return x++;
}

(A) 5
(B) 6
(C) 0
(D) Compiler error
(E) None of these

Answer: (A)
Explanation:

In this example:
array[]: an array of pointers to functions that take no parameters and return int.
ptr: a pointer to an array whose elements are pointers to such functions.

(**ptr)() = (** (&array)) () //ptr=&array
= (*array) () // from rule *&p=p
=array [0] () //from rule *(p+i)=p[i]
=display () //array[0]=display
(*(*ptr+1))() =(*(*&array+1))() //ptr=&array
=*(array+1) () // from rule *&p=p
=array [1] () //from rule *(p+i)=p[i]
=getch () //array[1]=getch

19)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
int i;
char far *ptr=(char *)0XB8000000;
*ptr='A';
*(ptr+1)=1;
*(ptr+2)='B';
*(ptr+3)=2;
*(ptr+4)='C';
*(ptr+5)=4;
return 0;
}
Output:
The output will be A, B and C displayed in blue, green and red respectively: the far pointer writes directly into text-mode video memory at 0xB8000000, where each character byte is followed by an attribute byte (1 = blue, 2 = green, 4 = red).

21)
What will be the output if you compile and execute the following C code?
#include <stdio.h>
#include <dos.h>
int main(){
int j;
union REGS i,o;
char far *ptr=(char *)0XA0000000;
i.h.ah=0;
i.h.al=0x13;
int86(0x10,&i,&o);
for(j=1;j<=100;j++){
*(ptr+j)=4;
}
return 0;
}

22)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int dynamic(int,...);
int main(){
int x,y;
x=dynamic(2,4,6,8,10,12,14);
y=dynamic(3,6,9,12);
printf("%d %d ",x,y);
return 0;
}

int dynamic(int s,…){
void *ptr;
ptr=...;
(int *)ptr+=2;
s=*(int *)ptr;
return s;
}

(A) 8 12
(B) 14 12
(C) 2 3
(D) Compiler error
(E) None of these
Explanation:

In C, three continuous dots (an ellipsis) denote a variable number of arguments to a function. In this example ptr is intended to be a generic pointer to the first element of the variable argument list; after incrementing it by two it points to the third element (8 in the first call, 12 in the second).
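
Note that the code in the question is not standard C: the ellipsis cannot be assigned to a pointer, and (int *)ptr is not an lvalue that can be incremented. The portable way to walk a variable argument list uses <stdarg.h>; a small sketch in the same spirit, which returns the third variable argument (8 and 12 for the two calls above):

#include <stdarg.h>
#include <stdio.h>

/* Returns the third variable argument; callers must pass at least three. */
int third(int first, ...) {
    va_list ap;
    int val;

    va_start(ap, first);
    (void)va_arg(ap, int);   /* skip the first variable argument  */
    (void)va_arg(ap, int);   /* skip the second variable argument */
    val = va_arg(ap, int);   /* keep the third                    */
    va_end(ap);
    return val;
}

int main(void) {
    printf("%d %d\n", third(2, 4, 6, 8, 10, 12, 14), third(3, 6, 9, 12));  /* 8 12 */
    return 0;
}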

24)
Which of the following is not a correct pointer declaration?

(i)int * const * ptr
(ii)int const * const * ptr;
(iii)const int ** const ptr;
(iv)const int const **ptr;
(v)int const ** const ptr;

(A) All are correct.
(B) Only (ii) is incorrect.
(C) Only (iv) is incorrect.
(D) Both (iii) and (v) are incorrect.
(E) All are incorrect

25)
What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
char arr[]="C Question Bank";
float *fptr;
fptr=(float *)arr;
fptr++;
printf("%s",fptr);
return 0;
}

(A) C Question Bank
(B) Question Bank
(C) Bank
(D) estion Bank
(E) Compilation error

Answer: (D)

26)

In the following declaration ptr is
far * near * huge * ptr;

(A) Near pointer.
(B) Far pointer.
(C) Huge pointer.
(D) Near and far pointer.
(E) Near, far and huge pointer.

Answer: (C)

27)

What will be the output if you compile and execute the following C code?

#include <stdio.h>
int main(){
char arr[]="C Question Bank";
char *p;
p+=3;
p=arr;
p+=3;
*p=100;
printf("%s",arr);
return 0;
}

(A) C question Bank
(B) C quesdion Bank
(C) C qdestion Bank
(D) C q100estion Bank
(E) Compilation error

Answer: (C)

28)
In which of the following declarations is ptr not a pointer?
(A) int(*ptr)()
(B) long **volatile*ptr
(C) int(*ptr[2])[3]
(D) float * (*ptr)[5]
(E) All are pointer

Answer: (C)

29)
Which of the following is an incorrect C statement?

(A) We can increment array pointer
(B) We can increment function pointer
(C) We can increment structure pointer
(D) We can increment union pointer.
(E) We can increment generic pointer.

30)
Which of the following is incorrect about far pointers?

(i) The size of a far pointer is four bytes.
(ii) A far pointer can point to all segments of the resident memory.
(iii) If we increment a far pointer, it can move from one segment to another segment.
Choose correct option:

(A) Only (i) is incorrect.
(B) Only (ii) is incorrect.
(C) Only (iii) is incorrect.
(D) Both (ii) and (iii) are incorrect.
(E) All three are incorrect.

Answer: (C)


2G, 3G, 4G, 4G LTE, 5G

2G, 3G, 4G, 4G LTE, 5G – What are They?
Quite simply, the “G” stands for Generation, as in the next generation of wireless technologies. Each generation is supposedly faster, more secure and more reliable. The reliability factor is the hardest obstacle to overcome. 1G was not used to identify wireless technology until 2G, or the second generation, was released. That was a major jump in the technology when the wireless networks went from analog to digital. It’s all uphill from there. 3G came along and offered faster data transfer speeds, at least 200 kilobits per second, for multi-media use and was a long time standard for wireless transmissions regardless of what you heard on all those commercials.
It is still a challenge to get a true 4G connection, which promises upwards of a 1 Gbps (Gigabit per second) transfer rate if you are standing still and in the perfect spot. 4G LTE comes very close to closing this gap. True 4G on a widespread basis may not be available until the next generation arrives. 5G?
What are the Standards of the G’s
Each of the Generations has standards that must be met to officially use the G terminology. Those standards are set by, you know, those people that set standards. The standards themselves are quite confusing but the advertisers sure know how to manipulate them. I will try to simplify the terms a bit.
1G – A term never widely used until 2G was available. This was the first generation of cell phone technology. Simple phone calls were all it was able to do.
2G – The second generation of cell phone transmission. A few more features were added to the menu such as simple text messaging.
3G – This generation set the standards for most of the wireless technology we have come to know and love. Web browsing, email, video downloading, picture sharing and other Smartphone technology were introduced in the third generation. 3G should be capable of handling around 2 Megabits per second.

4G – The speed and standards of this generation of wireless need to be at least 100 Megabits per second and up to 1 Gigabit per second to pass as 4G. It also needs to share the network resources to support more simultaneous connections on the cell. As it develops, 4G could surpass the speed of the average wireless broadband home Internet connection. Few devices were capable of the full throttle when the technology was first released. Coverage of true 4G was limited to large metropolitan areas. Outside of the covered areas, 4G phones regressed to the 3G standards. When 4G first became available, it was simply a little faster than 3G. 4G is not the same as 4G LTE, which is very close to meeting the criteria of the standards.
The major wireless networks were not actually lying to anyone when 4G first rolled out, they simply stretched the truth a bit. A 4G phone had to comply with the standards but finding the network resources to fulfill the true standard was difficult. You were buying 4G capable devices before the networks were capable of delivering true 4G to the device. Your brain knows that 4G is faster than 3G so you pay the price for the extra speed. Marketing 101. The same will probably be true when 5G hits the markets.
4G LTE – Long Term Evolution. LTE sounds better. This buzzword is a version of 4G that is fast becoming the latest advertised technology and is getting very close to the speeds needed as the standards are set. When you start hearing about LTE Advanced, then we will be talking about true fourth generation wireless technologies, because they are the only two formats recognized by the International Telecommunication Union as true 4G at this time. But forget about that because 5G is coming soon to a phone near you. Then there is XLTE, which offers a minimum of double the bandwidth of 4G LTE and is available anywhere the AWS spectrum is deployed.


A simple command line tool (Tcl)

Tcl is a very simple programming language. If you have programmed before, you can learn enough to write interesting Tcl programs within a few hours. This page provides a quick overview of the main features of Tcl. After reading this you’ll probably be able to start writing simple Tcl scripts on your own; however, we recommend that you consult one of the many available Tcl books for more complete information.

Basic syntax
Tcl scripts are made up of commands separated by newlines or semicolons. Commands all have the same basic form illustrated by the following example:

expr 20 + 10
This command computes the sum of 20 and 10 and returns the result, 30. You can try out this example and all the others in this page by typing them to a Tcl application such as tclsh; after a command completes, tclsh prints its result.
Each Tcl command consists of one or more words separated by spaces. In this example there are four words: expr, 20, +, and 10. The first word is the name of a command and the other words are arguments to that command. All Tcl commands consist of words, but different commands treat their arguments differently. The expr command treats all of its arguments together as an arithmetic expression, computes the result of that expression, and returns the result as a string. In the expr command the division into words isn’t significant: you could just as easily have invoked the same command as

expr 20+10
However, for most commands the word structure is important, with each word used for a distinct purpose.
All Tcl commands return results. If a command has no meaningful result then it returns an empty string as its result.

Variables
Tcl allows you to store values in variables and use the values later in commands. The set command is used to write and read variables. For example, the following command modifies the variable x to hold the value 32:

set x 32
The command returns the new value of the variable. You can read the value of a variable by invoking set with only a single argument:
set x
You don’t need to declare variables in Tcl: a variable is created automatically the first time it is set. Tcl variables don’t have types: any variable can hold any value.
To use the value of a variable in a command, use variable substitution as in the following example:

expr $x*3
When a $ appears in a command, Tcl treats the letters and digits following it as a variable name, and substitutes the value of the variable in place of the name. In this example, the actual argument received by the expr command will be 32*3 (assuming that variable x was set as in the previous example). You can use variable substitution in any word of any command, or even multiple times within a word:
set cmd expr
set x 11
$cmd $x*$x
Command substitution
You can also use the result of one command in an argument to another command. This is called command substitution:

set a 44
set b [expr $a*4]
When a [ appears in a command, Tcl treats everything between it and the matching ] as a nested Tcl command. Tcl evaluates the nested command and substitutes its result into the enclosing command in place of the bracketed text. In the example above the second argument of the second set command will be 176.
Quotes and braces
Double-quotes allow you to specify words that contain spaces. For example, consider the following script:

set x 24
set y 18
set z "$x + $y is [expr $x + $y]"
After these three commands are evaluated variable z will have the value 24 + 18 is 42. Everything between the quotes is passed to the set command as a single word. Note that (a) command and variable substitutions are performed on the text between the quotes, and (b) the quotes themselves are not passed to the command. If the quotes were not present, the set command would have received 6 arguments, which would have caused an error.
Curly braces provide another way of grouping information into words. They are different from quotes in that no substitutions are performed on the text between the curly braces:

set z {$x + $y is [expr $x + $y]}
This command sets variable z to the value "$x + $y is [expr $x + $y]".
Control structures
Tcl provides a complete set of control structures including commands for conditional execution, looping, and procedures. Tcl control structures are just commands that take Tcl scripts as arguments. The example below creates a Tcl procedure called power, which raises a base to an integer power:

proc power {base p} {
set result 1
while {$p > 0} {
set result [expr $result * $base]
set p [expr $p - 1]
}
return $result
}
This script consists of a single command, proc. The proc command takes three arguments: the name of a procedure, a list of argument names, and the body of the procedure, which is a Tcl script. Note that everything between the curly brace at the end of the first line and the curly brace on the last line is passed verbatim to proc as a single argument. The proc command creates a new Tcl command named power that takes two arguments. You can then invoke power with commands like the following:
power 2 6
power 1.15 5
When power is invoked, the procedure body is evaluated. While the body is executing it can access its arguments as variables: base will hold the first argument and p will hold the second.

The body of the power procedure contains three Tcl commands: set, while, and return. The while command does most of the work of the procedure. It takes two arguments, an expression ($p > 0) and a body, which is another Tcl script. The while command evaluates its expression argument using rules similar to those of the C programming language and if the result is true (nonzero) then it evaluates the body as a Tcl script. It repeats this process over and over until eventually the expression evaluates to false (zero). In this case the body of the while command multiplies the result value by base and then decrements p. When p reaches zero the result contains the desired power of base. The return command causes the procedure to exit with the value of variable result as the procedure's result.

Where do commands come from?
As you have seen, all of the interesting features in Tcl are represented by commands. Statements are commands, expressions are evaluated by executing commands, control structures are commands, and procedures are commands.

Tcl commands are created in three ways. One group of commands is provided by the Tcl interpreter itself. These commands are called builtin commands. They include all of the commands you have seen so far and many more (see below). The builtin commands are present in all Tcl applications.

The second group of commands is created using the Tcl extension mechanism. Tcl provides APIs that allow you to create a new command by writing a command procedure in C or C++ that implements the command. You then register the command procedure with the Tcl interpreter by telling Tcl the name of the command that the procedure implements. In the future, whenever that particular name is used for a Tcl command, Tcl will call your command procedure to execute the command. The builtin commands are also implemented using this same extension mechanism; their command procedures are simply part of the Tcl library.
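
As a rough sketch of that mechanism (assuming Tcl 8.x and its public C API; the command name "double" and the init function name are only illustrative), a command procedure and its registration might look like this:

#include <tcl.h>

/* Command procedure implementing a Tcl command "double" that doubles an integer */
static int DoubleCmd(ClientData cd, Tcl_Interp *interp,
                     int objc, Tcl_Obj *const objv[]) {
    int value;
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "integer");
        return TCL_ERROR;
    }
    if (Tcl_GetIntFromObj(interp, objv[1], &value) != TCL_OK)
        return TCL_ERROR;
    Tcl_SetObjResult(interp, Tcl_NewIntObj(2 * value));
    return TCL_OK;
}

/* Registration: after this runs, Tcl scripts can call "double 21" and get 42 */
int Example_Init(Tcl_Interp *interp) {
    Tcl_CreateObjCommand(interp, "double", DoubleCmd, NULL, NULL);
    return TCL_OK;
}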

When Tcl is used inside an application, the application incorporates its key features into Tcl using the extension mechanism. Thus the set of available Tcl commands varies from application to application. There are also numerous extension packages that can be incorporated into any Tcl application. One of the best known extensions is Tk, which provides powerful facilities for building graphical user interfaces. Other extensions provide object-oriented programming, database access, more graphical capabilities, and a variety of other features. One of Tcl’s greatest advantages for building integration applications is the ease with which it can be extended to incorporate new features or communicate with other resources.

The third group of commands consists of procedures created with the proc command, such as the power command created above. Typically, extensions are used for lower-level functions where C programming is convenient, and procedures are used for higher-level functions where it is easier to write in Tcl.

Other features
Tcl contains many other commands besides the ones used in the preceding examples. Here is a sampler of some of the features provided by the builtin Tcl commands:

More control structures, such as if, for, foreach, and switch.
String manipulation, including a powerful regular expression matching facility. Arbitrary-length strings can be passed around and manipulated just as easily as numbers.
I/O, including files on disk, network sockets, and devices such as serial ports. Tcl provides particularly simple facilities for socket communication over the Internet.
File management: Tcl provides several commands for manipulating file names, reading and writing file attributes, copying files, deleting files, creating directories, and so on.
Subprocess invocation: you can run other applications with the exec command and communicate with them while they run.
Lists: Tcl makes it easy to create collections of values (lists) and manipulate them in a variety of ways.
Arrays: you can create structured values consisting of name-value pairs with arbitrary string values for the names and values.
Time and date manipulation.
Events: Tcl allows scripts to wait for certain events to occur, such as an elapsed time or the availability of input data on a network socket.
Examples
A simple command line tool
A simple network server


Embedded Wireless with Lora

Microchip’s Long-Range Low-Power End Node Solution

With the growing Internet of Things, Microchip has a LoRa® technology wireless solution to address increasing demands on end-devices for long range connectivity, low-power for battery operation, and low infrastructure cost for volume deployment.
Microchip's LoRa technology solution is ready to run out of the box and, with the complete LoRaWAN protocol and certifications in place, it reduces time to market and saves development costs.
LoRa Technology is ideal for battery-operated sensor and low-power applications such as:

Internet of Things
Smart agriculture
Smart city
Sensor networks
Industrial automation
Smart meters
Asset tracking
Smart home
M2M
LoRa Key Features

LoRa Technology:
Long range – greater than 15 km
High capacity of up to 1 million nodes
Long battery life – over 10 years
Reduced synchronization overhead and no hops in mesh network
Secured and efficient network
Interference immunity
Microchip LoRa Technology Module:
Embedded LoRaWAN Protocol Class A – easily connects to LoRa Technology gateway
LoRaWAN Protocol Stack ready in system
Simple ASCII command set
Full certification by region
LoRa Technology for Long Range Connectivity.
Real world example: Deployment of 7 LoRa technology gateways creates IoT network coverage for most of Munich!


LoRaWAN Technology

LoRaWAN™ is a Low Power Wide Area Network (LPWAN) specification intended for wireless battery-operated Things in a regional, national or global network. LoRaWAN targets key requirements of the Internet of Things such as secure bi-directional communication, mobility and localization services. The LoRaWAN specification provides seamless interoperability among smart Things without the need for complex local installations and gives back the freedom to users, developers and businesses, enabling the roll-out of the Internet of Things.
LoRaWAN network architecture is typically laid out in a star-of-stars topology in which gateways act as transparent bridges relaying messages between end-devices and a central network server in the backend. Gateways are connected to the network server via standard IP connections while end-devices use single-hop wireless communication to one or many gateways. All end-point communication is generally bi-directional, but the protocol also supports operations such as multicast, enabling software upgrades over the air or other mass-distribution messages to reduce the on-air communication time.
Communication between end-devices and gateways is spread out on different frequency channels and data rates. The selection of the data rate is a trade-off between communication range and message duration. Due to the spread spectrum technology, communications with different data rates do not interfere with each other and create a set of “virtual” channels increasing the capacity of the gateway. LoRaWAN data rates range from 0.3 kbps to 50 kbps. To maximize both battery life of the end-devices and overall network capacity, the LoRaWAN network server is managing the data rate and RF output for each end-device individually by means of an adaptive data rate (ADR) scheme.
Nationwide networks targeting the Internet of Things, such as critical infrastructure, confidential personal data or critical functions for society, have a special need for secure communication. This has been solved by several layers of encryption:
Unique Network key (EUI64), ensuring security at the network level
Unique Application key (EUI64), ensuring end-to-end security at the application level
Device-specific key (EUI128)
LoRaWAN has several different classes of end-point devices to address the different needs reflected in the wide range of applications:
Bi-directional end-devices (Class A): End-devices of Class A allow for bi-directional communications whereby each end-device’s uplink transmission is followed by two short downlink receive windows. The transmission slot scheduled by the end-device is based on its own communication needs with a small variation based on a random time basis (ALOHA-type of protocol). This Class A operation is the lowest power end-device system for applications that only require downlink communication from the server shortly after the end-device has sent an uplink transmission. Downlink communications from the server at any other time will have to wait until the next scheduled uplink.
Bi-directional end-devices with scheduled receive slots (Class B): In addition to the Class A random receive windows, Class B devices open extra receive windows at scheduled times. In order for the end-device to open its receive window at the scheduled time, it receives a time-synchronized beacon from the gateway. This allows the server to know when the end-device is listening.
Bi-directional end-devices with maximal receive slots (Class C): End-devices of Class C have nearly continuously open receive windows, which are closed only while transmitting.


GSM – Overview

What is GSM?
If you are in Europe or Asia and using a mobile phone, then most probably you are using GSM technology in your mobile phone.

GSM stands for Global System for Mobile Communication. It is a digital cellular technology used for transmitting mobile voice and data services.

The concept of GSM emerged from a cell-based mobile radio system at Bell Laboratories in the early 1970s.

GSM is the name of a standardization group established in 1982 to create a common European mobile telephone standard.

GSM is the most widely accepted standard in telecommunications and it is implemented globally.

GSM is a circuit-switched system that divides each 200 kHz carrier into eight time slots. GSM operates on the mobile communication bands 900 MHz and 1800 MHz in most parts of the world. In the US, GSM operates in the bands 850 MHz and 1900 MHz.

GSM owns a market share of more than 70 percent of the world’s digital cellular subscribers.

GSM makes use of narrowband Time Division Multiple Access (TDMA) technique for transmitting signals.

GSM was developed using digital technology. Basic GSM carries circuit-switched data at 9.6 kbps, with higher data rates provided by later enhancements such as GPRS and EDGE.

Presently GSM supports more than one billion mobile subscribers in more than 210 countries throughout the world.

GSM provides basic to advanced voice and data services including roaming service. Roaming is the ability to use your GSM phone number in another GSM network.

GSM digitizes and compresses data, then sends it down through a channel with two other streams of user data, each in its own timeslot.

Why GSM?
Listed below are the features of GSM that account for its popularity and wide acceptance.

Improved spectrum efficiency

International roaming

Low-cost mobile sets and base stations (BSs)

High-quality speech

Compatibility with Integrated Services Digital Network (ISDN) and other telephone company services

Support for new services

GSM History
The following table shows some of the important events in the rollout of the GSM system.

Years Events
1982 Conference of European Posts and Telegraph (CEPT) establishes a GSM group to widen the standards for a pan-European cellular mobile system.
1985 A list of recommendations to be generated by the group is accepted.
1986 Executed field tests to check the different radio techniques recommended for the air interface.
1987 Time Division Multiple Access (TDMA) is chosen as the access method (with Frequency Division Multiple Access [FDMA]). The initial Memorandum of Understanding (MoU) is signed by telecommunication operators representing 12 countries.
1988 GSM system is validated.
1989 The European Telecommunications Standards Institute (ETSI) was given the responsibility of the GSM specifications.
1990 Phase 1 of the GSM specifications is delivered.
1991 Commercial launch of the GSM service occurs. The DCS1800 specifications are finalized.
1992 The addition of the countries that signed the GSM MoU takes place. Coverage spreads to larger cities and airports.
1993 Coverage of main roads GSM services starts outside Europe.
1994 Data transmission capabilities launched. The number of networks rises to 69 in 43 countries by the end of 1994.
1995 Phase 2 of the GSM specifications occurs. Coverage is extended to rural areas.
1996 June: 133 networks in 81 countries operational.
1997 July: 200 networks in 109 countries operational, around 44 million subscribers worldwide.
1999 Wireless Application Protocol (WAP) came into existence and became operational in 130 countries with 260 million subscribers.
2000 General Packet Radio Service(GPRS) came into existence.
2001 As of May 2001, over 550 million people were subscribers to mobile telecommunications.
GSM – Architecture
A GSM network comprises many functional units. These functions and interfaces are explained in this chapter. The GSM network can be broadly divided into:

The Mobile Station (MS)

The Base Station Subsystem (BSS)

The Network Switching Subsystem (NSS)

The Operation Support Subsystem (OSS)

Given below is a simple pictorial view of the GSM architecture.

GSM Architecture
The additional components of the GSM architecture comprise databases and messaging system functions:

Home Location Register (HLR)
Visitor Location Register (VLR)
Equipment Identity Register (EIR)
Authentication Center (AuC)
SMS Serving Center (SMS SC)
Gateway MSC (GMSC)
Chargeback Center (CBC)
Transcoder and Adaptation Unit (TRAU)
The following diagram shows the GSM network along with the added elements:

GSM Elements
The MS and the BSS communicate across the Um interface. It is also known as the air interface or the radio link. The BSS communicates with the Network Switching Subsystem (NSS) across the A interface.

GSM network areas
In a GSM network, the following areas are defined:

Cell : Cell is the basic service area; one BTS covers one cell. Each cell is given a Cell Global Identity (CGI), a number that uniquely identifies the cell.

Location Area : A group of cells form a Location Area (LA). This is the area that is paged when a subscriber gets an incoming call. Each LA is assigned a Location Area Identity (LAI). Each LA is served by one or more BSCs.

MSC/VLR Service Area : The area covered by one MSC is called the MSC/VLR service area.

PLMN : The area covered by one network operator is called the Public Land Mobile Network (PLMN). A PLMN can contain one or more MSCs.

GSM – Specification
The requirements for different Personal Communication Services (PCS) systems differ for each PCS network. Vital characteristics of the GSM specification are listed below:

Modulation
Modulation is the process of transforming the input data into a suitable format for the transmission medium. The transmitted data is demodulated back to its original form at the receiving end. The GSM uses Gaussian Minimum Shift Keying (GMSK) modulation method.

Access Methods
Radio spectrum being a limited resource that is consumed and divided among all the users, GSM devised a combination of TDMA/FDMA as the method to divide the bandwidth among the users. In this process, the FDMA part divides the frequency of the total 25 MHz bandwidth into 124 carrier frequencies of 200 kHz bandwidth.

Each BS is assigned one or multiple frequencies, and each of these frequencies is divided into eight timeslots using a TDMA scheme. Each of these slots is used for both transmission and reception of data. The slots are separated in time so that a mobile unit does not transmit and receive data at the same time.

Transmission Rate
The total symbol rate for GSM at 1 bit per symbol in GMSK produces 270.833 K symbols/second. The gross transmission rate of a timeslot is 22.8 Kbps.

GSM is a digital system with an over-the-air bit rate of 270 kbps.

Frequency Band
The uplink frequency range specified for GSM is 890 – 915 MHz (basic 900 MHz band only). The downlink frequency band is 935 – 960 MHz (basic 900 MHz band only).

Channel Spacing
Channel spacing indicates the spacing between adjacent carrier frequencies. For GSM, it is 200 kHz.

Speech Coding
For speech coding or processing, GSM uses Linear Predictive Coding (LPC). This tool compresses the bit rate and gives an estimate of the speech parameters: the audio signal is passed through a filter that mimics the vocal tract. The speech is encoded at 13 kbps.

Duplex Distance
Duplex distance is the spacing between the uplink and downlink frequencies. The duplex distance for GSM 900 is 45 MHz; each channel has two frequencies that are 45 MHz apart.

Misc
Frame duration : 4.615 ms

Duplex Technique : Frequency Division Duplexing (FDD).

Speech channels per RF channel : 8.

GSM – Addresses and Identifiers
GSM treats the users and the equipment in different ways. Phone numbers, subscribers, and equipment identifiers are some of the known ones. There are many other identifiers that have been well-defined, which are required for the subscriber’s mobility management and for addressing the remaining network elements. Vital addresses and identifiers that are used in GSM are addressed below.

International Mobile Station Equipment Identity (IMEI)
The International Mobile Station Equipment Identity (IMEI) is in effect a serial number which uniquely identifies a mobile station internationally. It is allocated by the equipment manufacturer and registered by the network operator, who stores it in the Equipment Identity Register (EIR). By means of the IMEI, one can recognize obsolete, stolen, or non-functional equipment.

Following are the parts of IMEI:

Type Approval Code (TAC) : 6 decimal places, centrally assigned.

Final Assembly Code (FAC) : 2 decimal places, assigned by the manufacturer.

Serial Number (SNR) : 6 decimal places, assigned by the manufacturer.

Spare (SP) : 1 decimal place.

Thus, IMEI = TAC + FAC + SNR + SP. It uniquely characterizes a mobile station and gives clues about the manufacturer and the date of manufacturing.
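
As a small illustration of that layout (a sketch only; the 15-digit string below is just an example, and the field widths follow the breakdown above):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char imei[] = "490154203237518";   /* example 15-digit IMEI string */
    char tac[7], fac[3], snr[7], sp[2];

    /* TAC (6) + FAC (2) + SNR (6) + SP (1) = 15 digits */
    strncpy(tac, imei,      6); tac[6] = '\0';
    strncpy(fac, imei + 6,  2); fac[2] = '\0';
    strncpy(snr, imei + 8,  6); snr[6] = '\0';
    strncpy(sp,  imei + 14, 1); sp[1]  = '\0';

    printf("TAC=%s FAC=%s SNR=%s SP=%s\n", tac, fac, snr, sp);
    return 0;
}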

International Mobile Subscriber Identity (IMSI)
Every registered user has a unique International Mobile Subscriber Identity (IMSI), which is stored in the Subscriber Identity Module (SIM).

The IMSI comprises the following parts:

Mobile Country Code (MCC) : 3 decimal places, internationally standardized.

Mobile Network Code (MNC) : 2 decimal places, for unique identification of mobile network within the country.

Mobile Subscriber Identification Number (MSIN) : Maximum 10 decimal places, identification number of the subscriber in the home mobile network.

Mobile Subscriber ISDN Number (MSISDN)
The real telephone number of a mobile station is the Mobile Subscriber ISDN Number (MSISDN). Based on the SIM, a mobile station can have several MSISDNs, as a separate MSISDN can be assigned for each subscription or service.

Listed below is the structure followed by MSISDN categories, as they are defined based on international ISDN number plan:

Country Code (CC) : Up to 3 decimal places.

National Destination Code (NDC) : Typically 2-3 decimal places.

Subscriber Number (SN) : Maximum 10 decimal places.

Mobile Station Roaming Number (MSRN)
Mobile Station Roaming Number (MSRN) is an interim, location-dependent ISDN number assigned to a mobile station by the regionally responsible Visitor Location Register (VLR). Using the MSRN, incoming calls are channelled to the MS.

The MSRN has the same structure as the MSISDN.

Country Code (CC) : of the visited network.

National Destination Code (NDC) : of the visited network.

Subscriber Number (SN) : in the current mobile network.

Location Area Identity (LAI)
Within a PLMN, each Location Area has its own unique Location Area Identity (LAI). The LAI is based on an international standard and structured in the format below:

Country Code (CC) : 3 decimal places.

Mobile Network Code (MNC) : 2 decimal places.

Location Area Code (LAC) : maximum 5 decimal places, or 2 × 8 bits coded in hexadecimal (LAC < FFFF).

Temporary Mobile Subscriber Identity (TMSI)
Temporary Mobile Subscriber Identity (TMSI) can be assigned by the VLR, which is responsible for the current location of a subscriber. The TMSI needs to have only local significance in the area handled by the VLR. This is stored on the network side only in the VLR and is not passed to the Home Location Register (HLR).

Together with the current location area, the TMSI identifies a subscriber uniquely. It can contain up to 4 × 8 bits.

Local Mobile Subscriber Identity (LMSI)
Each mobile station can be assigned a Local Mobile Subscriber Identity (LMSI) by the VLR, which serves as an additional key. This key can be used as an auxiliary search key for each mobile station within its region, and it can also help accelerate database access. An LMSI is assigned when the mobile station registers with the VLR and is sent to the HLR. The LMSI comprises four octets (4 × 8 bits).

Cell Identifier (CI)
Using a Cell Identifier (CI) of maximum 2 × 8 bits, the individual cells within an LA can be identified. Combining the LAI with the CI gives the Global Cell Identity (GCI), which uniquely identifies a cell.

GSM – Operations
Once a Mobile Station initiates a call, a series of events takes place. Analyzing these events can give an insight into the operation of the GSM system.

Mobile Phone to Public Switched Telephone Network (PSTN)
When a mobile subscriber makes a call to a PSTN telephone subscriber, the following sequence of events takes place:

The MSC/VLR receives the message of a call request.

The MSC/VLR checks if the mobile station is authorized to access the network. If so, the mobile station is activated. If the mobile station is not authorized, then the service will be denied.

MSC/VLR analyzes the number and initiates a call setup with the PSTN.

MSC/VLR asks the corresponding BSC to allocate a traffic channel (a radio channel and a time slot).

The BSC allocates the traffic channel and passes the information to the mobile station.

The called party answers the call and the conversation takes place.

The mobile station keeps on taking measurements of the radio channels in the present cell and the neighbouring cells and passes the information to the BSC. The BSC decides if a handover is required. If so, a new traffic channel is allocated to the mobile station and the handover takes place. If handover is not required, the mobile station continues to transmit in the same frequency.

PSTN to Mobile Phone
When a PSTN subscriber calls a mobile station, the following sequence of events takes place:

The Gateway MSC receives the call and queries the HLR for the information needed to route the call to the serving MSC/VLR.

The GMSC routes the call to the MSC/VLR.

The MSC checks the VLR for the location area of the MS.

The MSC contacts the MS via the BSC through a broadcast message, that is, through a paging request.

The MS responds to the page request.

The BSC allocates a traffic channel and sends a message to the MS to tune to the channel. The MS generates a ringing signal and, after the subscriber answers, the speech connection is established.

Handover, if required, takes place, as discussed in the earlier case.

To transmit the speech over the radio channel in the stipulated time, the MS codes it at the rate of 13 Kbps. The BSC transcodes the speech to 64 Kbps and sends it over a land link or a radio link to the MSC. The MSC then forwards the speech data to the PSTN. In the reverse direction, the speech is received at 64 Kbps at the BSC and the BSC transcodes it to 13 Kbps for radio transmission.

GSM supports 9.6 Kbps data that can be channelled in one TDMA timeslot. To supply higher data rates, many enhancements were done to the GSM standards (GSM Phase 2 and GSM Phase 2+).

GSM – Protocol Stack
GSM architecture is a layered model that is designed to allow communications between two different systems. The lower layers provide services to the upper-layer protocols. Each layer passes suitable notifications to ensure the transmitted data has been formatted, transmitted, and received accurately.

The GSM protocol stack diagram is shown below:

GSM Protocol Stack
MS Protocols
Based on the interface, the GSM signaling protocol is assembled into three general layers:

Layer 1 : The physical layer. It uses the channel structures over the air interface.

Layer 2 : The data-link layer. Across the Um interface, the data-link layer is a modified version of the Link access protocol for the D channel (LAP-D) protocol used in ISDN, called Link access protocol on the Dm channel (LAP-Dm). Across the A interface, the Message Transfer Part (MTP), Layer 2 of SS7 is used.

Layer 3 : GSM signalling protocol’s third layer is divided into three sublayers:

Radio Resource Management (RR),
Mobility Management (MM), and
Connection Management (CM).
MS to BTS Protocols
The RR layer is the lower layer that manages a link, both radio and fixed, between the MS and the MSC. For this formation, the main components involved are the MS, BSS, and MSC. The responsibility of the RR layer is to manage the RR-session, the time when a mobile is in a dedicated mode, and the radio channels including the allocation of dedicated channels.

The MM layer is stacked above the RR layer. It handles the functions that arise from the mobility of the subscriber, as well as the authentication and security aspects. Location management is concerned with the procedures that enable the system to know the current location of a powered-on MS so that incoming call routing can be completed.

The CM layer is the topmost layer of the GSM protocol stack. This layer is responsible for Call Control, Supplementary Service Management, and Short Message Service Management. Each of these services is treated as an individual layer within the CM layer. Other functions of the CC sublayer include call establishment, selection of the type of service (including alternating between services during a call), and call release.

BSC Protocols
The BSC uses a different set of protocols after receiving the data from the BTS. The Abis interface is used between the BTS and BSC. At this level, the radio resources at the lower portion of Layer 3 are changed from the RR to the Base Transceiver Station Management (BTSM). The BTS management layer is a relay function at the BTS to the BSC.

The RR protocols are responsible for the allocation and reallocation of traffic channels between the MS and the BTS. These services include controlling the initial access to the system, paging for MT calls, the handover of calls between cell sites, power control, and call termination. The BSC still has some radio resource management in place for the frequency coordination, frequency allocation, and the management of the overall network layer for the Layer 2 interfaces.

To transit from the BSC to the MSC, the BSS Mobile Application Part (BSSMAP) or the Direct Transfer Application Part (DTAP) is used, and SS7 protocols are applied by the relay, so that MTP Layers 1-3 can be used as the prime architecture.

MSC Protocols
At the MSC, starting from the BSC, the information is mapped across the A interface to the MTP Layers 1 through 3. Here, the Base Station System Management Application Part (BSSMAP) is the equivalent set of radio resource functions. The relay process is completed by the layers stacked on top of the Layer 3 protocols: BSSMAP/DTAP, MM, and CM. To find and connect to the users across the network, MSCs interact using the control-signalling network. Location registers are included in the MSC databases to assist in determining how and whether connections are to be made to roaming users.

Each GSM MS user is registered in an HLR, which in turn contains the user's location and subscribed services. The VLR is a separate register that is used to track the location of a user. When the user moves out of the HLR-covered area, the VLR is notified by the MS to find the location of the user. The VLR in turn, with the help of the control network, signals the HLR of the MS's new location. With the help of the location information contained in the user's HLR, the MT calls can be routed to the user.

GSM – User Services
GSM offers much more than just voice telephony. Contact your local GSM network operator for the specific services that you can avail.

GSM offers three basic types of services:

Telephony services or teleservices
Data services or bearer services
Supplementary services
Teleservices
A Teleservice uses the capabilities of a Bearer Service to transport data. These services are grouped as follows:

Voice Calls
The most basic Teleservice supported by GSM is telephony. This includes full-rate speech at 13 kbps and emergency calls, where the nearest emergency-service provider is notified by dialing three digits.

Videotext and Facsimile
Another group of teleservices includes Videotext access, Teletex transmission, Facsimile alternate speech and Facsimile Group 3, Automatic Facsimile Group 3, etc.

Short Text Messages
The Short Messaging Service (SMS) is a text messaging service that allows sending and receiving text messages on your GSM mobile phone. In addition to simple text messages, other text data including news, sports, financial, language, and location-based data can also be transmitted.

Bearer Services
Data services or Bearer Services are used through a GSM phone to receive and send data, the essential building block leading to widespread mobile Internet access and mobile data transfer. GSM currently has a data transfer rate of 9.6 kbps. New developments that push up data transfer rates for GSM users, HSCSD (High Speed Circuit Switched Data) and GPRS (General Packet Radio Service), are now available.

Supplementary Services
Supplementary services are additional services that are provided in addition to teleservices and bearer services. These services include caller identification, call forwarding, call waiting, multi-party conversations, and barring of outgoing (international) calls, among others. A brief description of supplementary services is given here:

Conferencing : It allows a mobile subscriber to establish a multiparty conversation, i.e., a simultaneous conversation between three or more subscribers, to set up a conference call. This service is only applicable to normal telephony.

Call Waiting : This service notifies a mobile subscriber of an incoming call during a conversation. The subscriber can answer, reject, or ignore the incoming call.

Call Hold : This service allows a subscriber to put an incoming call on hold and resume after a while. The call hold service is applicable to normal telephony.

Call Forwarding : Call Forwarding is used to divert calls from the original recipient to another number. It is normally set up by the subscriber himself. It can be used by the subscriber to divert calls from the Mobile Station when the subscriber is not available, and so to ensure that calls are not lost.

Call Barring : Call Barring is useful to restrict certain types of outgoing calls such as ISD or stop incoming calls from undesired numbers. Call barring is a flexible service that enables the subscriber to conditionally bar calls.

Number Identification : There are following supplementary services related to number identification:

Calling Line Identification Presentation : This service displays the telephone number of the calling party on your screen.

Calling Line Identification Restriction : A person not wishing their number to be presented to others subscribes to this service.

Connected Line Identification Presentation : This service is provided to give the calling party the telephone number of the person to whom they are connected. This service is useful in situations such as call forwarding, where the number connected is not the number dialled.

Connected Line Identification Restriction : There are times when the person called does not wish to have their number presented, and so they would subscribe to this service. Normally, this overrides the presentation service.

Malicious Call Identification : The malicious call identification service was provided to combat the spread of obscene or annoying calls. The victim should subscribe to this service, and then they could cause known malicious calls to be identified in the GSM network, using a simple command.

Advice of Charge (AoC) : This service was designed to give the subscriber an indication of the cost of the services as they are used. Furthermore, those service providers who wish to offer rental services to subscribers without their own SIM can also utilize this service in a slightly different form. AoC for data calls is provided on the basis of time measurements.

Closed User Groups (CUGs) : This service is meant for groups of subscribers who wish to call only each other and no one else.

Unstructured supplementary services data (USSD) : This allows operator-defined individual services.

GSM – Security and Encryption
GSM is the most secure cellular telecommunications system available today. GSM has its security methods standardized. GSM maintains end-to-end security by retaining the confidentiality of calls and the anonymity of the GSM subscriber.

Temporary identification numbers are assigned to the subscriber’s number to maintain the privacy of the user. The privacy of the communication is maintained by applying encryption algorithms and frequency hopping that can be enabled using digital systems and signalling.

This chapter gives an outline of the security measures implemented for GSM subscribers.

Mobile Station Authentication
The GSM network authenticates the identity of the subscriber through the use of a challenge-response mechanism. A 128-bit Random Number (RAND) is sent to the MS. The MS computes the 32-bit Signed Response (SRES) based on the encryption of the RAND with the authentication algorithm (A3) using the individual subscriber authentication key (Ki). Upon receiving the SRES from the subscriber, the GSM network repeats the calculation to verify the identity of the subscriber.

The individual subscriber authentication key (Ki) is never transmitted over the radio channel, as it is present in the subscriber's SIM, as well as the AUC, HLR, and VLR databases. If the received SRES agrees with the calculated value, the MS has been successfully authenticated and may continue. If the values do not match, the connection is terminated and an authentication failure is indicated to the MS.

The calculation of the signed response is processed within the SIM. It provides enhanced security, as confidential subscriber information such as the IMSI or the individual subscriber authentication key (Ki) is never released from the SIM during the authentication process.

Signalling and Data Confidentiality
The SIM contains the ciphering key generating algorithm (A8) that is used to produce the 64-bit ciphering key (Kc). This key is computed by applying the same random number (RAND) used in the authentication process to ciphering key generating algorithm (A8) with the individual subscriber authentication key (Ki).

GSM provides an additional level of security by having a way to change the ciphering key, making the system more resistant to eavesdropping. The ciphering key may be changed at regular intervals as required. As in case of the authentication process, the computation of the ciphering key (Kc) takes place internally within the SIM. Therefore, sensitive information such as the individual subscriber authentication key (Ki) is never revealed by the SIM.

Encrypted voice and data communications between the MS and the network is accomplished by using the ciphering algorithm A5. Encrypted communication is initiated by a ciphering mode request command from the GSM network. Upon receipt of this command, the mobile station begins encryption and decryption of data using the ciphering algorithm (A5) and the ciphering key (Kc).
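To make the data flow concrete, here is a minimal, self-contained sketch of how RAND, Ki, SRES, and Kc relate. The a3_placeholder() and a8_placeholder() functions are invented stand-ins (simple XOR mixing) used only to show the flow; the real A3/A8 algorithms are operator-specific and never leave the SIM or the AuC.

#include <stdint.h>
#include <stdio.h>

/* Placeholder for A3: derives the 32-bit SRES from Ki and RAND.
 * The real algorithm runs inside the SIM and the AuC. */
static uint32_t a3_placeholder(const uint8_t ki[16], const uint8_t rand_ch[16])
{
    uint32_t sres = 0;
    int i;
    for (i = 0; i < 16; i++)
        sres = (sres << 2) ^ (uint32_t)(ki[i] ^ rand_ch[i]);
    return sres;
}

/* Placeholder for A8: derives the 64-bit ciphering key Kc from Ki and RAND. */
static void a8_placeholder(const uint8_t ki[16], const uint8_t rand_ch[16], uint8_t kc[8])
{
    int i;
    for (i = 0; i < 8; i++)
        kc[i] = ki[i] ^ rand_ch[i + 8];
}

int main(void)
{
    uint8_t ki[16] = {0}, rand_ch[16] = {0}, kc[8];

    /* The network sends RAND; the SIM and the network each compute SRES and Kc from Ki + RAND. */
    uint32_t sres_from_ms = a3_placeholder(ki, rand_ch);
    uint32_t sres_network = a3_placeholder(ki, rand_ch);
    a8_placeholder(ki, rand_ch, kc);

    printf("authenticated: %s, Kc[0]=0x%02x\n",
           sres_from_ms == sres_network ? "yes" : "no", kc[0]);
    return 0;
}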

Subscriber Identity Confidentiality
To ensure subscriber identity confidentiality, the Temporary Mobile Subscriber Identity (TMSI) is used. Once the authentication and encryption procedures are done, the TMSI is sent to the mobile station. After the receipt, the mobile station responds. The TMSI is valid in the location area in which it was issued. For communications outside the location area, the Location Area Identification (LAI) is necessary in addition to the TMSI.

GSM – Billing
GSM service providers bill their customers based on the services they provide. The billing parameters are simple enough to charge a customer for the services provided.

This chapter provides an overview of the frequently used billing techniques and parameters applied to charge a GSM subscriber.

Telephony Service
These services can be charged on a per-call basis. The call initiator pays the charges, and incoming calls are nowadays generally free. A customer can be charged based on different parameters such as:

International call or long distance call.
Local call.
Call made during peak hours.
Call made during night time.
Discounted call during weekends.
Call per minute or per second.
A service provider can design many more criteria to charge its customers.
SMS Service
Most service providers charge their customers for SMS services based on the number of text messages sent. There are also premium SMS services for which service providers charge more than the normal SMS rate. These services are often offered in collaboration with television or radio networks to solicit SMS from the audience.

Most of the time, the charges are paid by the SMS sender, but for some services like stock and share prices, mobile banking facilities, and leisure booking services, the recipient of the SMS has to pay for the service.

GPRS Services
Using the GPRS service, you can browse the Internet, play online games, and download movies, so a service provider will charge you based on the data uploaded as well as downloaded on your mobile phone. These charges are typically levied per kilobyte of data downloaded/uploaded.

An additional parameter could be the QoS provided to you. If you want to watch a movie, a lower QoS may work because some data loss is acceptable, but if you are downloading a zip file, a single lost byte will corrupt the entire downloaded file.

Another parameter could be peak and off peak time to download a data file or to browse the Internet.

Supplementary Services
Most supplementary services are provided for a monthly rental or even free of charge. For example, call waiting, call forwarding, calling number identification, and call hold are often available at zero cost.

Call barring is a service that providers sometimes use just to recover their dues; otherwise, it is rarely requested by subscribers.

Call conferencing is charged as a set of simple telephone calls: the customer pays for the multiple calls made at a time, and service providers normally do not charge extra for the conferencing itself.

Closed User Group (CUG) is very popular and is mainly used to give special discounts to users who make calls to a particular defined group of subscribers.

Advice of Charge (AoC) can be charged based on number of queries made by a subscriber.

Posted in Uncategorized | Leave a comment

GSM Module Interfacing and commands

GSM Module Interfacing

A GSM module is used in many communication devices based on GSM (Global System for Mobile Communications) technology. It is used to interact with the GSM network using a computer. A GSM module only understands AT commands and responds accordingly. The most basic command is “AT”: if the module responds with OK, it is working properly; otherwise it responds with “ERROR”. There are various AT commands, such as ATA to answer a call, ATD to dial a call, AT+CMGR to read a message, AT+CMGS to send an SMS, etc. AT commands must be terminated with a carriage return, i.e. \r (0x0D in hex), for example “AT+CMGS\r”. We can control the GSM module using these commands.
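For illustration, a minimal session that sends a text message in SMS text mode might look like the following; the responses and the message reference number vary with module firmware, and the phone number is a placeholder:

AT                          -> OK
AT+CMGF=1                   -> OK        (select SMS text mode)
AT+CMGS="+91xxxxxxxxxx"     -> >         (module prompts for the message body)
Hello from the module       (finish with Ctrl+Z, i.e. 0x1A, to send)
                            -> +CMGS: 1
                               OK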

GSM Interfacing with 8051

Instead of using a PC, we can use a microcontroller to interact with the GSM module and an LCD to show the response from the GSM module. So we are going to interface GSM with an 8051 microcontroller (AT89S52). It is very easy to interface GSM with the 8051: we just need to send AT commands from the microcontroller, receive the response from the GSM module, and display it on the LCD. We use the microcontroller’s serial port to communicate with the GSM module, i.e. PIN 10 (RXD) and PIN 11 (TXD).
GSM Module SIM900A

First we need to connect the LCD to the 8051; you can learn this from here: LCD Interfacing with 8051 Microcontroller. Then we need to connect the GSM module to the 8051, and here we should pay some attention. First, check whether your GSM module can work at TTL logic levels or only with RS232. If your module has RX and TX (and GND) pins on board, it can work with TTL logic. If it has no RX/TX pins and only an RS232 port (a 9-pin serial connector), you need a MAX232 IC to connect the serial port to the microcontroller. The MAX232 converts RS232 levels to TTL logic, because the microcontroller only works at TTL levels. If the GSM module has RX and TX pins, you do not need a MAX232 or any other serial converter; you can directly connect the RX of the GSM module to TX (PIN 11) of the 8051 and the TX of the GSM module to RX (PIN 10) of the 8051. In our case I have used a SIM900A module, which has RX and TX pins, so I have not used a MAX232.
Circuit Diagram for GSM Interfacing with 8051 Microcontroller

The circuit diagram for GSM interfacing with the AT89S52 microcontroller is shown in the figure above. After making the connections, we just need to write a program to send AT commands to the GSM module and receive its response on the LCD. There are many AT commands, as described above, but the scope of this article is just to interface GSM with the 8051, so we are only going to send the command “AT” followed by “\r” (0x0D in hex). This will give us the response “OK”. You can extend this program to use all the facilities of the GSM module.

Code explanation

Besides all the LCD-related functions, here we have used the serial port and the timer mode register (TMOD). You can learn about the LCD functions and the rest of the code by going through our 8051 projects section; here I am explaining the serial-communication-related functions:

GSM_init() function:

This function is used to set the baud rate for the microcontroller. The baud rate is simply the number of bits per second transmitted or received, and we need to match the baud rate of the 8051 to the baud rate of the GSM module, i.e. 9600. We have used Timer 1 in Mode 2 (8-bit auto-reload mode) by setting the TMOD register to 0x20 and the high byte of Timer 1 (TH1) to 0xFD to get a baud rate of 9600. The SCON register is used to set the mode of serial communication; we have used Mode 1 (8-bit UART) with reception enabled.
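As a quick cross-check, assuming the usual 11.0592 MHz crystal (the crystal frequency is not stated in this article) and SMOD = 0, the reload value TH1 = 0xFD gives:

Baud rate = Fosc / (12 x 32 x (256 - TH1))
          = 11059200 / (12 x 32 x 3)
          = 9600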

GSM_write Function:

SBUF (the serial buffer special function register) is used for serial communication. Whenever we want to send a byte to a serial device, we put that byte in the SBUF register; when the complete byte has been sent, the TI bit is set by hardware. We need to clear it before sending the next byte. It is a flag that indicates the byte has been sent successfully. TI is bit 1 of the SCON register (SCON.1). We have sent “AT” using this function.

GSM_read function:

Similarly for receiving: whenever we receive a byte from the external device, that byte is put in the SBUF register and we just need to read it. When a complete byte has been received, the RI bit is set by hardware. We need to clear it before receiving the next byte. RI is bit 0 of the SCON register (SCON.0). We have read the response “OK” using this function.

Code:

#include <reg51.h> // 8051 SFR definitions (header name assumes the Keil toolchain; adjust for yours)
#define display_port P2 //Data pins connected to port 2 on microcontroller
sbit rs = P3^2; //RS pin connected to pin 2 of port 3
sbit rw = P3^3; // RW pin connected to pin 3 of port 3
sbit e = P3^4; //E pin connected to pin 4 of port 3
int k;
unsigned char str[26];
void GSM_init() // serial port initialization
{
TMOD=0x20; // Timer 1 selected, Mode 2 (8-bit auto-reload mode)
TH1=0xfd; // 9600 baud rate
SCON=0x50; // Mode 1 (8-bit UART), receiving enabled
TR1=1; // Start timer
}
void msdelay(unsigned int time) // Function for creating delay in milliseconds.
{
unsigned m,n ;
for(m=0;m<time;m++)
for(n=0;n<1275;n++);
}
void lcd_cmd(unsigned char command) //Function to send command instruction to LCD
{
display_port = command;
rs= 0;
rw=0;
e=1;
msdelay(1);
e=0;
}
void lcd_data(unsigned char disp_data) //Function to send display data to LCD
{
display_port = disp_data;
rs= 1;
rw=0;
e=1;
msdelay(1);
e=0;
}
void lcd_init() //Function to prepare the LCD and get it ready
{
lcd_cmd(0x38); // for using 2 lines and 5X7 matrix of LCD
msdelay(10);
lcd_cmd(0x0F); // turn display ON, cursor blinking
msdelay(10);
lcd_cmd(0x01); //clear screen
msdelay(10);
lcd_cmd(0x80); // bring cursor to beginning of first line
msdelay(10);
msdelay(10);
}
void lcd_string(unsigned char *str) // Function to display string on LCD
{
int i=0;
while(str[i]!='\0')
{
lcd_data(str[i]);
i++;
msdelay(10);
if(i==15) lcd_cmd(0xc2);
}
return;
}
void GSM_write(unsigned char ch) // Function to send commands to GSM
{
SBUF=ch; // Put byte in SBUF to send to GSM
while(TI==0); //wait until the byte transmission completes
TI=0; //clear TI to send next byte.
}
void GSM_read() // Function to read the response from GSM
{
while(RI==0); // Wait until the byte received
str[k]=SBUF; //storing byte in str array
RI=0; //clear RI to receive next byte
}

void main()
{
k=0;
lcd_init();
GSM_init();
msdelay(200);
lcd_string("Interfacing GSM with 8051");
msdelay(200);
lcd_cmd(0x01); // Clear LCD screen
msdelay(10);
GSM_write('A'); // Sending 'A' to GSM module
lcd_data('A');
msdelay(1);
GSM_write('T'); // Sending 'T' to GSM module
lcd_data('T');
msdelay(1);
GSM_write(0x0d); // Sending carriage return to GSM module
msdelay(50);
while(1)
{
GSM_read();
if(k>0 && str[k-1]=='O' && str[k]=='K'){ // check k>0 so str[k-1] never reads before the array
lcd_data(0x20); // Write 'Space'
lcd_data(str[k-1]);
lcd_data(str[k]);
break;
}
k=k+1;
}
}

Posted in Uncategorized | Leave a comment

U-Boot’s bring-up

Even though unnecessary in most cases, it’s sometimes desired to modify U-Boot’s own bring-up process, in particular for initializing custom hardware during early stages. This section explains the basics of this part of U-Boot.

U-Boot is one of the first things to run on the processor, and may be responsible for the most basic hardware initialization. On some platforms the processor’s RAM isn’t configured when U-Boot starts running, so the underlying assumption is that U-Boot may run directly from ROM (typically flash memory).

The bring-up process’ key event is hence when U-Boot copies itself from where it runs in the beginning into RAM, from which it runs the more sophisticated tasks (handling boot commands in particular). This self-copy is referred to as “relocation”.

Almost needless to say, the processor runs in “real mode”: The MMU, if there is one, is off. There is no memory translation nor protection. U-Boot plays a few dirty tricks based on this.

In gross terms, the U-Boot loader runs through the following phases:

Pre-relocation initialization (possibly directly from flash or other kind of ROM)
Relocation: Copy the code to RAM.
Post-relocation initialization (from proper RAM).
Execution of commands: Through autoboot or console shell
Passing control to the Linux kernel (or other target application)
Note that in several scenarios, U-Boot starts from proper RAM to begin with, and consequently there is no actual relocation taking place. The division into pre-relocation and post-relocation becomes somewhat artificial in these scenarios, yet this is the terminology.

Posted in Uncategorized | Leave a comment

U-boot

U-Boot is an open source Universal Boot Loader that is frequently used in the Linux community. Xilinx provides a Git tree located at https://github.com/Xilinx/u-boot-xlnx which includes U-Boot to run on Xilinx boards. The Xilinx U-Boot project is based on the source code from http://git.denx.de.

U-Boot Commands
The list of U-Boot commands can be accessed while in the U-Boot prompt. Type “help” or “?” for a complete listing of available commands. Below an example is given:

? – alias for ‘help’
base – print or set address offset
bdinfo – print Board Info structure
boot – boot default, i.e., run ‘bootcmd’
bootd – boot default, i.e., run ‘bootcmd’
bootm – boot application image from memory
bootp – boot image via network using BOOTP/TFTP protocol
cmp – memory compare
coninfo – print console devices and information
cp – memory copy
crc32 – checksum calculation
date – get/set/reset date & time
echo – echo args to console
editenv – edit environment variable
erase – erase FLASH memory
ext2load- load binary file from a Ext2 filesystem
ext2ls – list files in a directory (default /)
fatinfo – print information about filesystem
fatload – load binary file from a dos filesystem
fatls – list files in a directory (default /)
fdt – flattened device tree utility commands
flinfo – print FLASH memory information
go – start application at address ‘addr’
help – print command description/usage
iminfo – print header information for application image
imls – list all images found in flash
imxtract- extract a part of a multi-image
itest – return true/false on integer compare
loadb – load binary file over serial line (kermit mode)
loads – load S-Record file over serial line
loady – load binary file over serial line (ymodem mode)
loop – infinite loop on address range
md – memory display
mm – memory modify (auto-incrementing address)
mmc – MMC sub system
mmcinfo – display MMC info
mtest – simple RAM read/write test
mw – memory write (fill)
nfs – boot image via network using NFS protocol
nm – memory modify (constant address)
ping – send ICMP ECHO_REQUEST to network host
printenv- print environment variables
protect – enable or disable FLASH write protection
rarpboot- boot image via network using RARP/TFTP protocol
reset – Perform RESET of the CPU
run – run commands in an environment variable
setenv – set environment variables
sf – SPI flash sub-system
sleep – delay execution for some time
source – run script from memory
sspi – SPI utility commands
tftpboot- boot image via network using TFTP protocol
version – print monitor version

Programming QSPI Flash

U-Boot provides the SF command to program serial flash devices. On the ZC702 board you can use the SF command to program a QSPI device. Here is an example of loading an image file to a QSPI device.
uboot> sf
Usage:
sf probe [[bus:]cs] [hz] [mode] – init flash device on given SPI bus and chip select
sf read addr offset len – read ‘len’ bytes starting at ‘offset’ to memory at ‘addr’
sf write addr offset len – write ‘len’ bytes from memory at ‘addr’ to flash at ‘offset’
sf erase offset [+]len – erase ‘len’ bytes from ‘offset’; ‘+len’ round up ‘len’ to block size
sf update addr offset len – erase and write ‘len’ bytes from memory at ‘addr’ to flash at ‘offset’

uboot> sf probe 0 0 0
SF: Detected N25Q128 with page size 256, total 16 MiB
16384 KiB N25Q128 at 0:0 is now current device

// Detect QSPI Flash parameters
// To make QSPI clock run faster, higher speed can be set to second parameter,
// e.g. setting QSPI clock to 20MHz
// sf probe 0 20000000 0

uboot> sf erase 0 0x200000

// Erase 2MB from QSPI offset 0x0
// Note: If erase size is less than QSPI Flash page size, u-boot reports erase error

uboot> sf read 0x08000000 0 100

// Read QSPI Flash from 0x0 to DDR 0x08000000 with 100 bytes
// You can use any location in DDR as destination. Make sure it doesn't overwrite the u-boot
// code/data area. u-boot is at 0x04000000.

uboot> md 08000000
08000000: ffffffff ffffffff ffffffff ffffffff …………….

// Display content in memory 0x08000000.
// U-boot by default uses hex

// load the boot image to DDR
// load method can be KERMIT through UART, XMD dow -data through JTAG, TFTP through Ethernet
// or read from SD Card directly

zynq-boot> loadb 0x08000000

// load the boot image through KERMIT protocol after this step
// it is assumed that you should have a boot image generated using the bootgen utility

## Ready for binary (kermit) download to 0x08000000 at 115200 bps…
## Total Size = 0x0003e444 = 255044 Bytes
## Start Addr = 0x08000000
uboot> md 08000000 100

uboot> sf write 0x08000000 0 0x3E444

// Write from DDR address 0x08000000 to QSPI offset 0 with 0x3E444 bytes of data

// U-Boot read command can be used to see what is programmed in to QSPI memory.
// Following is the syntax of the “sf read” command.

zynq-boot> sf read <destination address> <flash offset> <length>

NOTE: The “destination address” should not be ZERO.

Example:

uboot> sf read 0x800 0x0 0x2000

Programming NAND Flash
U-Boot provides the nand command to program NAND devices. Here is an example of loading an image file to a NAND device. The command sequence for NAND is the same as for QSPI, except for the commands themselves. Below is the nand command sequence for writing an image to a NAND device. The read command at the end is just to ensure that the data was written properly; you can use the cmp command to compare the written data with the original data already present in DDR.
nand info
nand erase <nand offset> <size>

// Download the image to a DDR location (DDR addr) using tftp and then write it to NAND from that DDR address as shown below.

nand write <DDR addr> <nand offset> <size>

// The NAND programming is done with the above command, but to ensure that it was written successfully, read back the written data using the read command below.
// Provide a DDR addr different from the one above (offset by at least the image size) so that both copies can be compared with the cmp command to confirm the write succeeded.

nand read <DDR addr> <nand offset> <size>

Programming NOR Flash
U-Boot uses the regular memory commands to program NOR devices. Here is the command sequence for loading an image file to a NOR device.
flinfo
erase all
cp.b <DDR address> <NOR flash address> <size in bytes>
Authentication and Decryption in Zynq U-Boot
Zynq U-Boot can authenticate and decrypt partitions before loading them for execution. U-Boot initially loads the image to a location in DDR; this DDR location is then passed as an argument to the “zynqrsa <address>” command. The whole functionality is implemented under this command, so the user can load an image into DDR from TFTP (or any other source) and then provide that address to authenticate the image and load it.

The zynqrsa command authenticates and/or decrypts the images and loads them to DDR. The image has to be generated using bootgen with the proper authentication and encryption keys. The zynqrsa command authenticates or decrypts only those images whose partition owner is set to u-boot when preparing the images with bootgen. This is enabled only if the config CONFIG_CMD_ZYNQ_RSA is set, which also enables the decryption functionality. The decryption process can also be invoked on its own using the zynqaes command; for more details, check the zynqaes help.

U-Boot 14.3 (and newer releases) Specific Details
U-Boot now by default expects a uImage Linux kernel image and a ramdisk that is also wrapped with the mkimage utility. It uses the bootm command by default, which also passes the address of the device tree to the Linux kernel. The Linux build process will build a uImage when the uImage target is specified on the make command line.
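For example, on a Zynq-style ARM target the uImage might be built like this; the cross-compiler prefix and the load address are assumptions to adapt to your environment:

make ARCH=arm CROSS_COMPILE=arm-xilinx-linux-gnueabi- UIMAGE_LOADADDR=0x8000 uImage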

Mkimage Utility
The mkimage utility is part of U-Boot and is placed in the u-boot/tools directory during the build process. It is used to prepend a header onto the specified image such that U-Boot can verify an image was loaded into memory correctly.
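For instance, a gzip-compressed ramdisk could be wrapped roughly as follows; the file names, the image name, and the zero load/entry addresses are only examples:

u-boot/tools/mkimage -A arm -O linux -T ramdisk -C gzip -a 0 -e 0 -n "Ramdisk Image" -d ramdisk.image.gz uramdisk.image.gz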

Bootm Command Details
The bootm command has the following format:
bootm <kernel image address> <ramdisk image address> <device tree address>
The following U-Boot commands illustrate loading the Linux kernel uImage, a mkimage wrapped ramdisk, and a device tree into memory from the SD card and then booting the Linux kernel.
u-boot> fatload mmc 0 0x3000000 uImage
u-boot> fatload mmc 0 0x2A00000 devicetree.dtb
u-boot> fatload mmc 0 0x2000000 uramdisk.image.gz
u-boot> bootm 0x3000000 0x2000000 0x2A00000
With the bootm command, U-Boot is relocating the images before it boots Linux such that the addresses above may not be what the kernel sees. U-Boot also alters the device tree to tell the kernel where the ramdisk image is located in memory (initrd-start and initrd-end). The bootm command sets the r2 register to the address of the device tree in memory which is not done by the go command.

Posted in Uncategorized | Leave a comment

Linux and the Device Tree

Linux and the Device Tree
————————-
The Linux usage model for device tree data

Author: Grant Likely

This article describes how Linux uses the device tree. An overview of
the device tree data format can be found on the device tree usage page
at devicetree.org[1].

[1] http://devicetree.org/Device_Tree_Usage

The “Open Firmware Device Tree”, or simply Device Tree (DT), is a data
structure and language for describing hardware. More specifically, it
is a description of hardware that is readable by an operating system
so that the operating system doesn’t need to hard code details of the
machine.

Structurally, the DT is a tree, or acyclic graph with named nodes, and
nodes may have an arbitrary number of named properties encapsulating
arbitrary data. A mechanism also exists to create arbitrary
links from one node to another outside of the natural tree structure.

Conceptually, a common set of usage conventions, called ‘bindings’,
is defined for how data should appear in the tree to describe typical
hardware characteristics including data busses, interrupt lines, GPIO
connections, and peripheral devices.

As much as possible, hardware is described using existing bindings to
maximize use of existing support code, but since property and node
names are simply text strings, it is easy to extend existing bindings
or create new ones by defining new nodes and properties. Be wary,
however, of creating a new binding without first doing some homework
about what already exists. There are currently two different,
incompatible, bindings for i2c busses that came about because the new
binding was created without first investigating how i2c devices were
already being enumerated in existing systems.

1. History
———-
The DT was originally created by Open Firmware as part of the
communication method for passing data from Open Firmware to a client
program (like to an operating system). An operating system used the
Device Tree to discover the topology of the hardware at runtime, and
thereby support a majority of available hardware without hard coded
information (assuming drivers were available for all devices).

Since Open Firmware is commonly used on PowerPC and SPARC platforms,
the Linux support for those architectures has for a long time used the
Device Tree.

In 2005, when PowerPC Linux began a major cleanup and to merge 32-bit
and 64-bit support, the decision was made to require DT support on all
powerpc platforms, regardless of whether or not they used Open
Firmware. To do this, a DT representation called the Flattened Device
Tree (FDT) was created which could be passed to the kernel as a binary
blob without requiring a real Open Firmware implementation. U-Boot,
kexec, and other bootloaders were modified to support both passing a
Device Tree Binary (dtb) and to modify a dtb at boot time. DT was
also added to the PowerPC boot wrapper (arch/powerpc/boot/*) so that
a dtb could be wrapped up with the kernel image to support booting
existing non-DT aware firmware.

Some time later, FDT infrastructure was generalized to be usable by
all architectures. At the time of this writing, 6 mainlined
architectures (arm, microblaze, mips, powerpc, sparc, and x86) and 1
out of mainline (nios) have some level of DT support.

2. Data Model
————-
If you haven’t already read the Device Tree Usage[1] page,
then go read it now. It’s okay, I’ll wait….

2.1 High Level View
——————-
The most important thing to understand is that the DT is simply a data
structure that describes the hardware. There is nothing magical about
it, and it doesn’t magically make all hardware configuration problems
go away. What it does do is provide a language for decoupling the
hardware configuration from the board and device driver support in the
Linux kernel (or any other operating system for that matter). Using
it allows board and device support to become data driven; to make
setup decisions based on data passed into the kernel instead of on
per-machine hard coded selections.

Ideally, data driven platform setup should result in less code
duplication and make it easier to support a wide range of hardware
with a single kernel image.

Linux uses DT data for three major purposes:
1) platform identification,
2) runtime configuration, and
3) device population.

2.2 Platform Identification
—————————
First and foremost, the kernel will use data in the DT to identify the
specific machine. In a perfect world, the specific platform shouldn’t
matter to the kernel because all platform details would be described
perfectly by the device tree in a consistent and reliable manner.
Hardware is not perfect though, and so the kernel must identify the
machine during early boot so that it has the opportunity to run
machine-specific fixups.

In the majority of cases, the machine identity is irrelevant, and the
kernel will instead select setup code based on the machine’s core
CPU or SoC. On ARM for example, setup_arch() in
arch/arm/kernel/setup.c will call setup_machine_fdt() in
arch/arm/kernel/devtree.c which searches through the machine_desc
table and selects the machine_desc which best matches the device tree
data. It determines the best match by looking at the ‘compatible’
property in the root device tree node, and comparing it with the
dt_compat list in struct machine_desc (which is defined in
arch/arm/include/asm/mach/arch.h if you’re curious).

The ‘compatible’ property contains a sorted list of strings starting
with the exact name of the machine, followed by an optional list of
boards it is compatible with sorted from most compatible to least. For
example, the root compatible properties for the TI BeagleBoard and its
successor, the BeagleBoard xM board might look like, respectively:

compatible = "ti,omap3-beagleboard", "ti,omap3450", "ti,omap3";
compatible = "ti,omap3-beagleboard-xm", "ti,omap3450", "ti,omap3";

Where “ti,omap3-beagleboard-xm” specifies the exact model, it also
claims that it is compatible with the OMAP 3450 SoC, and the omap3 family
of SoCs in general. You’ll notice that the list is sorted from most
specific (exact board) to least specific (SoC family).

Astute readers might point out that the Beagle xM could also claim
compatibility with the original Beagle board. However, one should be
cautioned about doing so at the board level since there is typically a
high level of change from one board to another, even within the same
product line, and it is hard to nail down exactly what is meant when one
board claims to be compatible with another. For the top level, it is
better to err on the side of caution and not claim one board is
compatible with another. The notable exception would be when one
board is a carrier for another, such as a CPU module attached to a
carrier board.

One more note on compatible values. Any string used in a compatible
property must be documented as to what it indicates. Add
documentation for compatible strings in Documentation/devicetree/bindings.

Again on ARM, for each machine_desc, the kernel looks to see if
any of the dt_compat list entries appear in the compatible property.
If one does, then that machine_desc is a candidate for driving the
machine. After searching the entire table of machine_descs,
setup_machine_fdt() returns the ‘most compatible’ machine_desc based
on which entry in the compatible property each machine_desc matches
against. If no matching machine_desc is found, then it returns NULL.

The reasoning behind this scheme is the observation that in the majority
of cases, a single machine_desc can support a large number of boards
if they all use the same SoC, or same family of SoCs. However,
invariably there will be some exceptions where a specific board will
require special setup code that is not useful in the generic case.
Special cases could be handled by explicitly checking for the
troublesome board(s) in generic setup code, but doing so very quickly
becomes ugly and/or unmaintainable if it is more than just a couple of
cases.

Instead, the compatible list allows a generic machine_desc to provide
support for a wide common set of boards by specifying “less
compatible” values in the dt_compat list. In the example above,
generic board support can claim compatibility with “ti,omap3″ or
“ti,omap3450″. If a bug was discovered on the original beagleboard
that required special workaround code during early boot, then a new
machine_desc could be added which implements the workarounds and only
matches on “ti,omap3-beagleboard”.
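As a sketch of what such an entry could look like on ARM (the machine name and the workaround hook below are hypothetical, not actual mainline code):

#include <asm/mach/arch.h>

/* Hypothetical early-boot workaround; runs only on boards matching the list below. */
static void __init beagle_quirk_init_early(void)
{
	/* apply the board-specific fixups here */
}

static const char * const beagle_quirk_dt_compat[] = {
	"ti,omap3-beagleboard",
	NULL,
};

DT_MACHINE_START(OMAP3_BEAGLE_QUIRK, "TI BeagleBoard (with early-boot workaround)")
	.dt_compat	= beagle_quirk_dt_compat,
	.init_early	= beagle_quirk_init_early,
MACHINE_END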

PowerPC uses a slightly different scheme where it calls the .probe()
hook from each machine_desc, and the first one returning TRUE is used.
However, this approach does not take into account the priority of the
compatible list, and probably should be avoided for new architecture
support.

2.3 Runtime configuration
————————-
In most cases, a DT will be the sole method of communicating data from
firmware to the kernel, so it also gets used to pass in runtime and
configuration data like the kernel parameters string and the location
of an initrd image.

Most of this data is contained in the /chosen node, and when booting
Linux it will look something like this:

chosen {
	bootargs = "console=ttyS0,115200 loglevel=8";
	initrd-start = <0xc8000000>;
	initrd-end = <0xc8200000>;
};

The bootargs property contains the kernel arguments, and the initrd-*
properties define the address and size of an initrd blob. Note that
initrd-end is the first address after the initrd image, so this doesn’t
match the usual semantic of struct resource. The chosen node may also
optionally contain an arbitrary number of additional properties for
platform-specific configuration data.

During early boot, the architecture setup code calls of_scan_flat_dt()
several times with different helper callbacks to parse device tree
data before paging is setup. The of_scan_flat_dt() code scans through
the device tree and uses the helpers to extract information required
during early boot. Typically the early_init_dt_scan_chosen() helper
is used to parse the chosen node including kernel parameters,
early_init_dt_scan_root() to initialize the DT address space model,
and early_init_dt_scan_memory() to determine the size and
location of usable RAM.
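A board-specific scanner follows the same callback pattern. The sketch below is hypothetical (the "hypothetical,boot-mode" property and the function name are invented) and only illustrates the shape of an of_scan_flat_dt() callback:

/* Look for a "hypothetical,boot-mode" property in /chosen before paging is up. */
static int __init early_scan_boot_mode(unsigned long node, const char *uname,
				       int depth, void *data)
{
	const char *mode;

	if (depth != 1 || strcmp(uname, "chosen") != 0)
		return 0;			/* not /chosen, keep scanning */

	mode = of_get_flat_dt_prop(node, "hypothetical,boot-mode", NULL);
	if (mode)
		*(const char **)data = mode;	/* hand the string back to the caller */
	return 1;				/* found it, stop scanning */
}

/* Called from architecture setup code, e.g.:
 *	of_scan_flat_dt(early_scan_boot_mode, &boot_mode);
 */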

On ARM, the function setup_machine_fdt() is responsible for early
scanning of the device tree after selecting the correct machine_desc
that supports the board.

2.4 Device population
———————
After the board has been identified, and after the early configuration data
has been parsed, then kernel initialization can proceed in the normal
way. At some point in this process, unflatten_device_tree() is called
to convert the data into a more efficient runtime representation.
This is also when machine-specific setup hooks will get called, like
the machine_desc .init_early(), .init_irq() and .init_machine() hooks
on ARM. The remainder of this section uses examples from the ARM
implementation, but all architectures will do pretty much the same
thing when using a DT.

As can be guessed by the names, .init_early() is used for any machine-
specific setup that needs to be executed early in the boot process,
and .init_irq() is used to set up interrupt handling. Using a DT
doesn’t materially change the behaviour of either of these functions.
If a DT is provided, then both .init_early() and .init_irq() are able
to call any of the DT query functions (of_* in include/linux/of*.h) to
get additional data about the platform.

The most interesting hook in the DT context is .init_machine() which
is primarily responsible for populating the Linux device model with
data about the platform. Historically this has been implemented on
embedded platforms by defining a set of static clock structures,
platform_devices, and other data in the board support .c file, and
registering it en-masse in .init_machine(). When DT is used, then
instead of hard coding static devices for each platform, the list of
devices can be obtained by parsing the DT, and allocating device
structures dynamically.

The simplest case is when .init_machine() is only responsible for
registering a block of platform_devices. A platform_device is a concept
used by Linux for memory or I/O mapped devices which cannot be detected
by hardware, and for ‘composite’ or ‘virtual’ devices (more on those
later). While there is no ‘platform device’ terminology for the DT,
platform devices roughly correspond to device nodes at the root of the
tree and children of simple memory mapped bus nodes.

About now is a good time to lay out an example. Here is part of the
device tree for the NVIDIA Tegra board.

/{
	compatible = "nvidia,harmony", "nvidia,tegra20";
	#address-cells = <1>;
	#size-cells = <1>;
	interrupt-parent = <&intc>;

	chosen { };
	aliases { };

	memory {
		device_type = "memory";
		reg = <0x00000000 0x40000000>;
	};

	soc {
		compatible = "nvidia,tegra20-soc", "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		intc: interrupt-controller@50041000 {
			compatible = "nvidia,tegra20-gic";
			interrupt-controller;
			#interrupt-cells = <1>;
			reg = <0x50041000 0x1000>, <0x50040100 0x0100>;
		};

		serial@70006300 {
			compatible = "nvidia,tegra20-uart";
			reg = <0x70006300 0x100>;
			interrupts = <122>;
		};

		i2s1: i2s@70002800 {
			compatible = "nvidia,tegra20-i2s";
			reg = <0x70002800 0x100>;
			interrupts = <77>;
			codec = <&wm8903>;
		};

		i2c@7000c000 {
			compatible = "nvidia,tegra20-i2c";
			#address-cells = <1>;
			#size-cells = <0>;
			reg = <0x7000c000 0x100>;
			interrupts = <70>;

			wm8903: codec@1a {
				compatible = "wlf,wm8903";
				reg = <0x1a>;
				interrupts = <347>;
			};
		};
	};

	sound {
		compatible = "nvidia,harmony-sound";
		i2s-controller = <&i2s1>;
		i2s-codec = <&wm8903>;
	};
};

At .init_machine() time, Tegra board support code will need to look at
this DT and decide which nodes to create platform_devices for.
However, looking at the tree, it is not immediately obvious what kind
of device each node represents, or even if a node represents a device
at all. The /chosen, /aliases, and /memory nodes are informational
nodes that don’t describe devices (although arguably memory could be
considered a device). The children of the /soc node are memory mapped
devices, but the codec@1a is an i2c device, and the sound node
represents not a device, but rather how other devices are connected
together to create the audio subsystem. I know what each device is
because I’m familiar with the board design, but how does the kernel
know what to do with each node?

The trick is that the kernel starts at the root of the tree and looks
for nodes that have a ‘compatible’ property. First, it is generally
assumed that any node with a ‘compatible’ property represents a device
of some kind, and second, it can be assumed that any node at the root
of the tree is either directly attached to the processor bus, or is a
miscellaneous system device that cannot be described any other way.
For each of these nodes, Linux allocates and registers a
platform_device, which in turn may get bound to a platform_driver.

Why is using a platform_device for these nodes a safe assumption?
Well, for the way that Linux models devices, just about all bus_types
assume that its devices are children of a bus controller. For
example, each i2c_client is a child of an i2c_master. Each spi_device
is a child of an SPI bus. Similarly for USB, PCI, MDIO, etc. The
same hierarchy is also found in the DT, where I2C device nodes only
ever appear as children of an I2C bus node. Ditto for SPI, MDIO, USB,
etc. The only devices which do not require a specific type of parent
device are platform_devices (and amba_devices, but more on that
later), which will happily live at the base of the Linux /sys/devices
tree. Therefore, if a DT node is at the root of the tree, then it
really probably is best registered as a platform_device.

Linux board support code calls of_platform_populate(NULL, NULL, NULL, NULL)
to kick off discovery of devices at the root of the tree. The
parameters are all NULL because when starting from the root of the
tree, there is no need to provide a starting node (the first NULL), a
parent struct device (the last NULL), and we’re not using a match
table (yet). For a board that only needs to register devices,
.init_machine() can be completely empty except for the
of_platform_populate() call.

In the Tegra example, this accounts for the /soc and /sound nodes, but
what about the children of the SoC node? Shouldn’t they be registered
as platform devices too? For Linux DT support, the generic behaviour
is for child devices to be registered by the parent’s device driver at
driver .probe() time. So, an i2c bus device driver will register a
i2c_client for each child node, an SPI bus driver will register
its spi_device children, and similarly for other bus_types.
According to that model, a driver could be written that binds to the
SoC node and simply registers platform_devices for each of its
children. The board support code would allocate and register an SoC
device, a (theoretical) SoC device driver could bind to the SoC device,
and register platform_devices for /soc/interrupt-controller, /soc/serial,
/soc/i2s, and /soc/i2c in its .probe() hook. Easy, right?

Actually, it turns out that registering children of some
platform_devices as more platform_devices is a common pattern, and the
device tree support code reflects that and makes the above example
simpler. The second argument to of_platform_populate() is an
of_device_id table, and any node that matches an entry in that table
will also get its child nodes registered. In the Tegra case, the code
can look something like this:

static void __init harmony_init_machine(void)
{
/* … */
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}

“simple-bus” is defined in the ePAPR 1.0 specification as a property
meaning a simple memory mapped bus, so the of_platform_populate() code
could be written to just assume simple-bus compatible nodes will
always be traversed. However, we pass it in as an argument so that
board support code can always override the default behaviour.
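As an example of overriding the default, a board with an additional bus-like node could pass its own match table; the "acme,custom-bus" compatible and the function name are hypothetical:

static const struct of_device_id myboard_bus_match[] __initconst = {
	{ .compatible = "simple-bus" },
	{ .compatible = "acme,custom-bus" },	/* hypothetical extra bus binding */
	{ /* sentinel */ }
};

static void __init myboard_init_machine(void)
{
	of_platform_populate(NULL, myboard_bus_match, NULL, NULL);
}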

[Need to add discussion of adding i2c/spi/etc child devices]

Appendix A: AMBA devices
————————

ARM Primecells are a certain kind of device attached to the ARM AMBA
bus which include some support for hardware detection and power
management. In Linux, struct amba_device and the amba_bus_type is
used to represent Primecell devices. However, the fiddly bit is that
not all devices on an AMBA bus are Primecells, and for Linux it is
typical for both amba_device and platform_device instances to be
siblings of the same bus segment.

When using the DT, this creates problems for of_platform_populate()
because it must decide whether to register each node as either a
platform_device or an amba_device. This unfortunately complicates the
device creation model a little bit, but the solution turns out not to
be too invasive. If a node is compatible with “arm,amba-primecell”, then
of_platform_populate() will register it as an amba_device instead of a
platform_device.

Posted in Uncategorized | Leave a comment

RISC and CISC

The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.

Multiplying Two Numbers in Memory
On the right is a diagram representing the storage scheme for a generic computer. The main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let’s say we want to find the product of two numbers – one stored in location 2:3 and another stored in location 5:2 – and then store the product back in the location 2:3.

The CISC Approach
The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations. For this particular task, a CISC processor would come prepared with a specific instruction (we’ll call it “MULT”). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

MULT 2:3, 5:2
MULT is what is known as a “complex instruction.” It operates directly on the computer’s memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher level language. For instance, if we let “a” represent the value of 2:3 and “b” represent the value of 5:2, then this command is identical to the C statement “a = a * b.”

One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.

The RISC Approach
RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the “MULT” command described above could be divided into three separate commands: “LOAD,” which moves data from the memory bank to a register, “PROD,” which finds the product of two operands located within the registers, and “STORE,” which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:

LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A
At first, this may seem like a much less efficient way of completing the operation. Because there are more lines of code, more RAM is needed to store the assembly level instructions. The compiler must also perform more work to convert a high-level language statement into code of this form.

CISC                                                RISC
Emphasis on hardware                                Emphasis on software
Includes multi-clock complex instructions           Single-clock, reduced instructions only
Memory-to-memory: “LOAD” and “STORE”                Register-to-register: “LOAD” and “STORE”
incorporated in instructions                        are independent instructions
Small code sizes, high cycles per second            Low cycles per second, large code sizes
Transistors used for storing complex instructions   Spends more transistors on memory registers

However, the RISC strategy also brings some very important advantages. Because each instruction requires only one clock cycle to execute, the entire program will execute in approximately the same amount of time as the multi-cycle “MULT” command. These RISC “reduced instructions” require fewer transistors and less hardware space than the complex instructions, leaving more room for general purpose registers. Because all of the instructions execute in a uniform amount of time (i.e. one clock), pipelining is possible.

Separating the “LOAD” and “STORE” instructions actually reduces the amount of work that the computer must perform. After a CISC-style “MULT” command is executed, the processor automatically erases the registers. If one of the operands needs to be used for another computation, the processor must re-load the data from the memory bank into a register. In RISC, the operand will remain in the register until another value is loaded in its place.

The Performance Equation
The following equation is commonly used for expressing a computer’s performance ability:

time per program = (time per cycle) x (cycles per instruction) x (instructions per program)

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
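As a made-up illustration of the trade-off: if a CISC program needs 10 instructions averaging 6 cycles each while the equivalent RISC program needs 40 single-cycle instructions, then at the same clock rate the CISC version costs 10 x 6 = 60 cycles and the RISC version 40 x 1 = 40 cycles, even though the RISC code is four times longer.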
RISC Roadblocks
Despite the advantages of RISC based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support.

Although Apple’s Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance with the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive.

Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel’s efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.

The Overall RISC Advantage
Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

Posted in Uncategorized | Leave a comment

spi device driver

In the user space

Once you have this set up, you can boot your sunxi device and the device node will appear as /dev/spidevN.0.
Transfer size is limited to 64 bytes on sun4i and 128 bytes on sun6i. You have to loop over longer messages in your code; a chunking sketch is given below. Some SPI devices may require that you prefix each message fragment with a header, others may not. YMMV. Look up the transfer diagrams in the device datasheet.
Known problem: using the spidev_test.c example you will receive [spi]: drivers/spi/spi_sunxi.c(L1025) cpu tx data time out!
Using the spidev_fdx.c method it works like a charm! :)
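Here is a minimal sketch of such a chunking loop over an already-opened spidev file descriptor; the 64-byte limit and the absence of a per-fragment header are assumptions to adjust for your device:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

#define SPI_CHUNK 64 /* sun4i limit; use 128 on sun6i */

/* Send 'len' bytes through an already-opened spidev fd in chunks of at most SPI_CHUNK. */
static int spi_send_chunked(int fd, const uint8_t *data, size_t len)
{
	while (len) {
		size_t n = len > SPI_CHUNK ? SPI_CHUNK : len;
		struct spi_ioc_transfer tr;

		memset(&tr, 0, sizeof(tr));
		tr.tx_buf = (unsigned long)data;
		tr.len = n;

		if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
			return -1; /* errno describes the failure */

		data += n;
		len -= n;
	}
	return 0;
}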

I’ve made a more user-friendly library (C functions) to communicate using SPIdev:
(Note: this library assumes the read and write addresses are 2 bytes)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/spi/spidev.h>

char buf[10];
char buf2[10];
extern int com_serial;
extern int failcount;

struct spi_ioc_transfer xfer[2];

//////////
// Init SPIdev
//////////
int spi_init(char filename[40])
{
int file;
__u8 mode, lsb, bits;
__u32 speed=2500000;

if ((file = open(filename,O_RDWR)) < 0)
{
printf("Failed to open the bus.");
/* ERROR HANDLING; you can check errno to see what went wrong */
com_serial=0;
exit(1);
}

///////////////
// Verifications
///////////////
//possible modes: mode |= SPI_LOOP; mode |= SPI_CPHA; mode |= SPI_CPOL; mode |= SPI_LSB_FIRST; mode |= SPI_CS_HIGH; mode |= SPI_3WIRE; mode |= SPI_NO_CS; mode |= SPI_READY;
//multiple possibilities using |
/*
if (ioctl(file, SPI_IOC_WR_MODE, &mode)<0) {
perror("can't set spi mode");
return;
}
*/

if (ioctl(file, SPI_IOC_RD_MODE, &mode) < 0)
{
perror("SPI rd_mode");
return -1;
}
if (ioctl(file, SPI_IOC_RD_LSB_FIRST, &lsb) < 0)
{
perror("SPI rd_lsb_fist");
return -1;
}
//sunxi supports only 8 bits
/*
if (ioctl(file, SPI_IOC_WR_BITS_PER_WORD, 8)<0)
{
perror("can't set bits per word");
return;
}
*/
if (ioctl(file, SPI_IOC_RD_BITS_PER_WORD, &bits) < 0)
{
perror("SPI bits_per_word");
return -1;
}
/*
if (ioctl(file, SPI_IOC_WR_MAX_SPEED_HZ, &speed)<0)
{
perror("can't set max speed hz");
return;
}
*/
if (ioctl(file, SPI_IOC_RD_MAX_SPEED_HZ, &speed) < 0)
{
perror("SPI max_speed_hz");
return -1;
}

printf("%s: spi mode %d, %d bits %sper word, %d Hz max\n",filename, mode, bits, lsb ? "(lsb first) " : "", speed);

//xfer[0].tx_buf = (unsigned long)buf;
xfer[0].len = 3; /* Length of command to write*/
xfer[0].cs_change = 0; /* Keep CS activated */
xfer[0].delay_usecs = 0; //delay in us
xfer[0].speed_hz = 2500000; //speed
xfer[0].bits_per_word = 8; // bits per word 8

//xfer[1].rx_buf = (unsigned long) buf2;
xfer[1].len = 4; /* Length of Data to read */
xfer[1].cs_change = 0; /* Keep CS activated */
xfer[1].delay_usecs = 0;
xfer[1].speed_hz = 2500000;
xfer[1].bits_per_word = 8;

return file;
}

//////////
// Read n bytes from the 2 bytes add1 add2 address
//////////

char * spi_read(int add1,int add2,int nbytes,int file)
{
int status;

memset(buf, 0, sizeof buf);
memset(buf2, 0, sizeof buf2);
buf[0] = 0x01;
buf[1] = add1;
buf[2] = add2;
xfer[0].tx_buf = (unsigned long)buf;
xfer[0].len = 3; /* Length of command to write*/
xfer[1].rx_buf = (unsigned long) buf2;
xfer[1].len = nbytes; /* Length of Data to read */
status = ioctl(file, SPI_IOC_MESSAGE(2), xfer);
if (status < 0)
{
perror("SPI_IOC_MESSAGE");
return NULL;
}
//printf("env: %02x %02x %02x\n", buf[0], buf[1], buf[2]);
//printf("ret: %02x %02x %02x %02x\n", buf2[0], buf2[1], buf2[2], buf2[3]);

com_serial=1;
failcount=0;

return buf2;
}

//////////
// Write n bytes to the 2 bytes add1 add2 address
//////////

void spi_write(int add1,int add2,int nbytes,char value[10],int file)
{
int status;

memset(buf, 0, sizeof buf);
memset(buf2, 0, sizeof buf2);
buf[0] = 0x00; // write command (spi_read uses 0x01 above)
buf[1] = add1;
buf[2] = add2;
if (nbytes>=1) buf[3] = value[0];
if (nbytes>=2) buf[4] = value[1];
if (nbytes>=3) buf[5] = value[2];
if (nbytes>=4) buf[6] = value[3];
xfer[0].tx_buf = (unsigned long)buf;
xfer[0].len = nbytes+3; /* Length of command to write*/
status = ioctl(file, SPI_IOC_MESSAGE(1), xfer);
if (status < 0)
{
perror("SPI_IOC_MESSAGE");
return;
}
//printf("env: %02x %02x %02x\n", buf[0], buf[1], buf[2]);
//printf("ret: %02x %02x %02x %02x\n", buf2[0], buf2[1], buf2[2], buf2[3]);

com_serial=1;
failcount=0;
}

Usage example:
char *buffer;
char buf[10];

file=spi_init("/dev/spidev0.0"); //dev

buf[0] = 0x41;
buf[1] = 0xFF;
spi_write(0xE6,0x0E,2,buf,file); //this will write value 0x41FF to the address 0xE60E

buffer=(char *)spi_read(0xE6,0x0E,4,file); //reading the address 0xE60E

close(file);

For info, transfers are possible all the way up to the 12000000 Hz frequency limit.
In the kernel space

If you are coding a driver for an SPI device, it makes most sense to code it as a kernel module. Instead of using /dev/spidevX.X, you should register a new (slave) device and exchange data through it. If you are wondering what bus number you should use, you can find the available buses by listing /sys/class/spi_master. There should be nodes like spi0, spi1… The number after “spi” is the bus number. Which number a given SPI master gets depends on the device-tree configuration.

Here is an example of a module that writes 0x00 to SPI when the module is initialized and 0xff when it is removed. It uses bus number 0 and communicates at a speed of 1 Hz:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/spi/spi.h>

#define MY_BUS_NUM 0
static struct spi_device *spi_device;

static int __init spi_init(void)
{
int ret;
unsigned char ch = 0x00;
struct spi_master *master;

//Register information about your slave device:
struct spi_board_info spi_device_info = {
.modalias = "my-device-driver-name",
.max_speed_hz = 1, //speed your device (slave) can handle
.bus_num = MY_BUS_NUM,
.chip_select = 0,
.mode = 3,
};

/*To send data we have to know what spi port/pins should be used. This information
can be found in the device-tree. */
master = spi_busnum_to_master( spi_device_info.bus_num );
if( !master ){
printk("MASTER not found.\n");
return -ENODEV;
}

// create a new slave device, given the master and device info
spi_device = spi_new_device( master, &spi_device_info );

if( !spi_device ) {
printk("FAILED to create slave.\n");
return -ENODEV;
}

spi_device->bits_per_word = 8;

ret = spi_setup( spi_device );

if( ret ){
printk("FAILED to setup slave.\n");
spi_unregister_device( spi_device );
return -ENODEV;
}

spi_write(spi_device, &ch, sizeof(ch));

return 0;
}

static void __exit spi_exit(void)
{
unsigned char ch = 0xff;

if( spi_device ){
spi_write(spi_device, &ch, sizeof(ch));
spi_unregister_device( spi_device );
}
}

module_init(spi_init);
module_exit(spi_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Piktas Zuikis");
MODULE_DESCRIPTION("SPI module example");

Posted in Uncategorized | Leave a comment