The build process of programming languages

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

When I wrote my first program, my trainer told me I had merely written the source code. Now, I had to translate it into a language understandable by computers. We call that compilation. It happens by pressing Ctrl+F5 (in Visual Studio). And there you have it, your program is now an executable. That's how I was initially taught what the build process of a program is. And that is a good enough explanation for beginners. But at some point, I realized that when I press Ctrl+F5, some processes happen behind the scenes which we don't see. Those processes are what we will explore in today's article. And did you know that when you press Ctrl+F5, the processes used differ between languages? Have you ever wondered why it is harder to code in C++ than in C#? Well, we won't be able to explore that last question in full detail. It has a lot to do with language design and the decisions made over the years. But we will explore the fundamental difference between those languages. That difference lies in their build process.
Continue Reading

Languages High and Low


This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

When I started programming, I was introduced to C# and I thought it was pretty fun. As I advanced in my studies, I learned other programming languages as well: JavaScript, PHP, Java. Even though I learned to code in these languages, what I didn't understand is why there are so many languages. What purpose did they all serve? Furthermore, I was curious where all these languages came from. How did they come to be? What is a low-level language and why does it still exist? The goal of this article is to help you find the answers to some of these questions and to further fire up your curiosity about the nature of programming languages and computers. I will walk you through the evolution of modern programming languages: why they came to be and what problems they solved.
Continue Reading

Understanding Standard Input and Output

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Somewhere in the first lectures of a programming basics course, we are shown how to take input and show output on the terminal. That's called standard input/output, or Standard IO for short. So, in C# we have Console.WriteLine and Console.ReadLine. In C++, we have cin and cout. All of these are associated with the topic of Standard IO. And what we are told is that the standard input is the keyboard and the standard output is the screen. And for the most part, that is the case. But what we aren't told is that the Standard IO can be changed. There is a way to accept input from a file and redirect output to another file. No, I'm not talking about writing code to read/write files. I am talking about using the Standard IO for the job, via the terminal.
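To give you a taste of what the article covers, here is a minimal C# sketch (my own illustration, not code from the article itself): a program that echoes its standard input to its standard output, which you can then redirect via the terminal. The file names in the comments are hypothetical.

```csharp
using System;

class Echo
{
    static void Main()
    {
        // Read from standard input until end-of-file, echo to standard output.
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            Console.WriteLine(line);
        }
    }
}

// Run it from the terminal to see redirection in action (assuming the
// compiled program is called echo.exe):
//
//   echo.exe                            keyboard in, screen out (the defaults)
//   echo.exe < input.txt                standard input comes from a file
//   echo.exe > output.txt               standard output goes to a file
//   echo.exe < input.txt > output.txt   both redirected at once
```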
Continue Reading

What you don’t know about sorting algorithms

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Last time, we delved into bitwise operations. This time, we will look at a more high-level computer science concept - algorithms. When we first get introduced to algorithms, we normally start with learning sorting algorithms. In comparison to other algorithms, they are easier to grasp. And if we pay attention in class, we will do a good job of understanding them. However, what we don't learn in these classes is when they can actually be useful.
Continue Reading

Introduction to bitwise operations

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Last time, we talked about character sets and encoding. This time, we will return to dealing with binary numbers. However, we won't examine how binary numbers work and what their nature is - we covered that in previous articles. Today, we will see how to apply that knowledge in practice by examining how bitwise operations work. This topic is usually neglected in a traditional computer science curriculum (at least it is in some universities I know of). But I think this knowledge can be useful for two reasons:
  1. Expanding your computer science knowledge by gaining a deeper understanding of binary numbers and of low-level computer science aspects.
  2. Gaining a valuable tool which can be useful when pursuing specialization as a low-level programmer (Embedded developer, for example).
We will start by examining what tools we have at our disposal - the operations which modern programming languages provide us with. Then, we will move on to applying that knowledge to actually manipulating numbers in a binary fashion. And finally, we will see some real-world examples of how bitwise operations are used to build highly efficient systems.
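As a small preview, here is a sketch (my own example, not the article's) of the bitwise operators most languages provide, illustrated in C#. The binary literals require C# 7.0 or later:

```csharp
using System;

class BitwiseDemo
{
    static void Main()
    {
        int flags = 0b0101;  // binary literal for 5

        Console.WriteLine(flags & 0b0011);   // AND          -> 1
        Console.WriteLine(flags | 0b0010);   // OR           -> 7
        Console.WriteLine(flags ^ 0b0110);   // XOR          -> 3
        Console.WriteLine(~flags);           // NOT          -> -6 (two's complement)
        Console.WriteLine(flags << 1);       // shift left   -> 10 (multiply by 2)
        Console.WriteLine(flags >> 1);       // shift right  -> 2  (divide by 2)

        // A classic real-world use: packing several boolean flags into one int.
        const int Read = 1 << 0, Write = 1 << 1, Execute = 1 << 2;
        int permissions = Read | Write;                 // set two bits
        Console.WriteLine((permissions & Write) != 0);  // test one bit -> True
    }
}
```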
Continue Reading

What you need to know about character sets and encoding

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

My last article was about different data types and some tricks with them. We talked a little about characters as well. However, working with them can be a little bit strange due to the presence of a fancy term in computing called encoding. Today, a friend asked me to go and fix the subtitles for his movies. He told me that strange symbols kept appearing all the time. He had tried reinstalling Windows and changing all sorts of options, but nothing seemed to work. He clearly had no idea what an encoding is. I guess that is normal, since he doesn't have a CS background. But there seem to be a lot of developers out there (myself included, in the old days) who don't know what encoding means. Sure, they might have heard of UTF-8, but what is it? We have ASCII, right? Well, I am going to address the issue of encoding in this article, as I think it is fundamental to anyone getting their hands dirty with programming and computing. It seems not many programming basics courses cover this topic in much detail.
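To illustrate the kind of problem the article tackles, here is a small C# sketch (my own example) showing how decoding bytes with the wrong encoding produces exactly those "strange symbols":

```csharp
using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        string text = "Здравей";  // "Hello" in Bulgarian

        // Characters become different bytes under different encodings.
        // Each Cyrillic letter takes 2 bytes in UTF-8, so 7 letters -> 14 bytes.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(text);
        Console.WriteLine(utf8Bytes.Length);  // 14

        // Decoding UTF-8 bytes as if they were Latin-1 yields mojibake -
        // the "strange symbols" from the subtitles story above.
        string garbled = Encoding.GetEncoding("ISO-8859-1").GetString(utf8Bytes);
        Console.WriteLine(garbled);  // Ð-prefixed gibberish instead of Cyrillic
    }
}
```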
Continue Reading

How the binary nature of computers affects our data types

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

In the past few weeks, we have discussed the different ways computers deal with binary numbers in order to represent the numbers we are used to seeing - positive, negative and real. This time, we will take a step back from diving into the details of how the hardware deals with such issues and focus on how the design decisions taken by computer architects affect the way we represent data in our code. In particular, we shall explore the different "features" that the data types we use in our code have hidden from us.
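As a preview of one such hidden "feature", here is a quick C# sketch (my own illustration) of what fixed-width integer types do on overflow:

```csharp
using System;

class DataTypeQuirks
{
    static void Main()
    {
        // Fixed-width types silently wrap around on overflow
        // (C# arithmetic is unchecked by default).
        int max = int.MaxValue;       // 2147483647
        Console.WriteLine(max + 1);   // -2147483648

        byte b = 255;                 // the largest value a byte can hold
        b++;
        Console.WriteLine(b);         // 0

        // The same 32 bits mean different numbers to signed and unsigned types.
        Console.WriteLine(unchecked((uint)-1));  // 4294967295
    }
}
```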
Continue Reading

Floating point numbers

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Hey, it has been a while since I last wrote an article in this series. Last time, we covered negative binary numbers and the different ways of representing them in a computer. This time, we will explain how to deal with real numbers. More specifically, we will briefly discuss fixed point numbers and then move on to the core of this article - floating point numbers.
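Here is a tiny C# sketch (my own example) of the classic surprise that motivates the topic:

```csharp
using System;

class FloatingPointDemo
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation,
        // so tiny rounding errors creep in.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);           // False
        Console.WriteLine(sum.ToString("G17"));  // 0.30000000000000004

        // The usual remedy: compare with a tolerance, not with ==.
        Console.WriteLine(Math.Abs(sum - 0.3) < 1e-9);  // True
    }
}
```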
Continue Reading

Negative binary numbers

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Computers store data using numbers, and last time we covered how they store positive numbers in binary. But our adventure would be incomplete if we didn't present how to store negative numbers. This time, we will explore the different variants of storing negative binary numbers, and we shall see why we store them that way.
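As a preview, here is a small C# sketch (my own example) of the dominant variant, two's complement, where negating a number means inverting its bits and adding one:

```csharp
using System;

class TwosComplementDemo
{
    static void Main()
    {
        sbyte positive = 5;   // bit pattern 00000101
        sbyte negative = -5;  // bit pattern 11111011 in two's complement

        // Reinterpret the same 8 bits as an unsigned byte: 251 = 256 - 5.
        Console.WriteLine(unchecked((byte)negative));  // 251

        // The two's complement rule: invert the bits, then add one.
        Console.WriteLine((sbyte)(~positive + 1));     // -5
    }
}
```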
Continue Reading

Introduction to binary numbers

This article is part of the sequence The Basics You Won't Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Last time, we covered how a processor works. We mentioned that it uses instructions, which are encoded as numbers. But these numbers are stored in a computer as binary digits. Today, I begin a series of posts on how binary numbers work.
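If you want to peek at binary right away, here is a one-minute C# sketch (my own illustration) of how an ordinary number looks in base 2:

```csharp
using System;

class BinaryDemo
{
    static void Main()
    {
        int answer = 42;

        // Show the binary digits behind an ordinary integer.
        Console.WriteLine(Convert.ToString(answer, 2));   // 101010

        // Each digit is a power of two: 32 + 8 + 2 = 42.
        Console.WriteLine(Convert.ToInt32("101010", 2));  // 42
    }
}
```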
Continue Reading
