The first version, developed by Mark Otto and Jacob Thornton at Twitter in 2011, was a pure CSS library. It was designed to provide a solid foundation for the most common user interface tasks that front-end developers faced every day, including grids and page layouts, typography, navigation, inputs and forms. It hid the chronic problems of layout incompatibilities between different browser versions, and offered a consistent styling and customization mechanism with the LESS CSS dynamic stylesheet language. Clearly, it fulfilled that need very successfully: by 2012 it had become the most watched and forked project on GitHub.
But 2011/2012 was also when smartphones and tablets exploded onto the consumer scene. In the UK alone, 34.3% of the population had smartphones in 2011, and by 2012 this figure had grown to 41.7%. In the US, there were over 28 million iPad users in 2011, growing to over 53 million in 2012. (Data from New Media Trend Watch.)
Web developers quickly learned that the experience of browsing on these mobile devices with their smaller screens, limited bandwidth and touch (or pen) interfaces was very poor.
The first response was to try to develop “mobile” versions of existing sites. These chopped out functionality, using much simpler page layouts and smaller (or no) images to help reduce the bandwidth demands, and fewer of the complex user interactions or animations that mobile browsers couldn’t support, or that didn’t work so well when you were prodding a screen with a finger or stylus. This was so-called graceful degradation. It was usually implemented by detecting that you were running on a mobile browser, and redirecting the user to a special mobile-only site.
However, there was another school of thought. Steven Champeon, at a SXSW conference in 2003, had coined the term Progressive Enhancement for an approach to design that starts out by considering the simplest version (using very basic mark-up and navigation features that are available on all browsers), and enhances it if more features are available on the user’s client platform, by linking external CSS and JavaScript. The progressive enhancement movement also embraced the need for developers to consider issues of accessibility and semantic structuring of content.
The SmartPhone revolution of 2011 brought these ideas into the mainstream. The notion of designing for the simplest version first became known as mobile first, and the notion of seamlessly enhancing and adapting layout and interactions with CSS and JavaScript became known as responsive design. Responsive design was particularly appealing to developers who struggled with the costs and challenges of developing and maintaining two separate sites for mobile and desktop clients, and it became one of .Net magazine’s top trends for web development in 2012.
The Bootstrap developers were aware of the need to support responsive design techniques, and in 2012 they released Version 2 of the framework. This included some (optional) files that used the newly enhanced CSS3 @media queries (which were widely supported by both desktop and mobile browsers) to allow the grid and other layout elements to adapt to different screen sizes. You could choose whether to use a fixed grid (where the grid columns are a fixed pixel size), a breakpoint approach (where the grid switches size at particular thresholds corresponding to common screen widths) or a fluid approach (where the grid adapts seamlessly as the display size changes on a particular device, or as a browser window is resized).
By 2013, it became apparent that it would probably be less than a year before a majority of page views on the web would be from mobile devices (or, at least, non-desktop devices, including tablet, mobile and smart TVs). The argument for a mobile first design philosophy becomes much stronger if the majority of your visitors might be coming in on a mobile device!
With that in mind, the Bootstrap team took a strategic decision to bake responsive design into the core of the framework for their version 3 release, and to encourage a mobile-first approach.
Their key goals were:
In this series, we’re going to look at some techniques for mobile-first design. We’ll consider the needs of the mobile user versus a desktop or tablet user, along with the impossible challenge of being all things to all people. We’ll see how to use Bootstrap Version 3 to progressively enhance the user’s experience and minimize the impact on power consumption, bandwidth, SEO and accessibility, without adversely impacting the cost of developing and maintaining the code.
We’re also going to see how Bootstrap alone is not enough to meet the technical requirements of a mobile-first design philosophy, and how some simple CSS and JavaScript techniques can be used to help optimize the implementation of your site for a mobile user.
We’re also going to look at how to work with and overcome the constraints of the Bootstrap framework, and learn how to produce semantic HTML, customized to your particular requirements.
We’ll get started by looking at some basic tools of responsive design (including the Bootstrap 3 grid system), and then think about what mobile-first means for the creative process.
We’ve learned a bit about computer architecture – all the way up from the transistors that form the physical basis of modern digital computers, to more abstract concepts like data memory, instructions and I/O. We’ve looked at how we can encode information like the positive and negative integers into the on-and-off world of a digital computer, using 1s and 0s (bits), and at some of the ways in which the constraints of that digital representation differ from “regular” mathematics.
We’ve also learned about Boolean algebra, and how to construct complex logical propositions from simple operators, as the foundation for our decision making.
We’ve written our first programs, using very low level instructions like the ones provided by the processor manufacturers, and seen how we can estimate the cost of the algorithm those programs embody in terms of both the size of the program and amount of memory it consumes (storage costs), and the time it takes to execute (computational cost).
The complexity of writing programs at that very low level quickly became apparent, and we started to look at a higher-level, more declarative programming language called F# to help us express the intent of code more clearly.
In our first real foray with F#, we learned how to create a function which takes an input value and maps it to an output value. We also saw how we could compose functions (using the output of one function as the input of another function) to solve more complex problems; in our first example we used a function that itself returned a function to simulate a function with multiple parameters, and then used that technique to implement the XOR derived operator in F#.
In this section we’re going to start looking at a more real-world problem, and see how we can use functions to solve it.
Right. Let’s imagine that I’m sick of my job, and I have designated 14:00-15:00 as my official “Lottery Fantasy Hour”. There’s even a cost centre for it on my timesheet.
The average jackpot prize on the UK National Lottery is about £2,000,000. The way it works is that you select 6 numbers from 1…49. On the day of the draw, 6 “main numbers” are drawn. Match all those, and you win the jackpot. The order of the numbers doesn’t matter – just whether they match or not. There’s a load of other extraneous rubbish about a bonus ball that comes into play if you’ve matched 5 main numbers, but that doesn’t win you the jackpot. And in my lottery fantasy, it is all about the Jackpot. OK?
So, the question is, what are the odds of my winning the jackpot?
Well, I’ve got 6 numbers out of the 49.
When the draw happens, the chance that the first ball that comes out of the machine is one of mine is therefore 6 out of 49. (There are just 6 balls that could come out, from the 49, that would match one of my numbers.)
If that matched, then I’ve got 5 numbers left; and there are 48 balls in the machine. So the chance that the second ball that comes out is one of mine is now 5 out of 48.
If I’m still in the game, then I’ve got 4 numbers left to choose from, and there are 47 balls still in the machine. So the chance that the third ball that comes out matches one of those is 4 out of 47.
You can see where this is going. If that matched too, then I end up with 3 out of 46, 2 out of 45 and finally (and I’m on the edge of my seat now) 1 last ball from the 44 remaining that could match and win me the big money.
You might remember that when we have probabilities like this, each of which depends on the previous result, we can multiply them together to get the overall probability.
Let’s try that:

$$\frac{6}{49} \times \frac{5}{48} \times \frac{4}{47} \times \frac{3}{46} \times \frac{2}{45} \times \frac{1}{44} = \frac{1}{13{,}983{,}816}$$

That’s one in 13,983,816.
So, I’ve got about a 1 in 14 million chance of winning about 2 million pounds. Hmmm. Doesn’t sound good.
Let’s look at the Euro lottery instead. This is a pick 7 numbers out of 50 draw, and the average jackpot is about £55,000,000.
Applying the same logic as above, we end up with

$$\frac{7}{50} \times \frac{6}{49} \times \frac{5}{48} \times \frac{4}{47} \times \frac{3}{46} \times \frac{2}{45} \times \frac{1}{44} = \frac{1}{99{,}884{,}400}$$

That’s one in 99,884,400.
That makes me about 7 or 8 times less likely to win, but the amount I’m likely to win is over 20 times higher! I can feel my yacht beckoning.
Those numbers are a bit depressing, though. I’m probably going to be sat at my desk doing the Lottery Fantasy Hour until I die. Maybe I need a better approach.
What if I ran a lottery instead of playing it? And made it available to everyone in the office?
Step 1. Ignore all local gambling laws [1]
Step 2. Develop a lottery designed to keep people playing.
[1] This is not legal advice.
Let’s say there are 30 people in our office, and they all opt in at 1-unit-of-local-currency a week – £1, for instance.
Over the course of, say, 10 years, we want someone to win every other week (for that roll-over excitement).
At 30 plays a week, 52 weeks a year, for 10 years, that’s 15,600 plays, and we want a win every other week, which is 260 winners. So the odds of a jackpot win want to be somewhat less than $\frac{260}{15{,}600}$, which is 1 in 60.
I also own 10 ping pong balls, a sharpie and a black felt bag. So we’re going to be doing a draw of some-number-of-balls from 10.
So – how can I work out what the odds would be for different numbers of balls in the draw?
Let’s remind ourselves of the odds for drawing 7 from 50. That was 1 in

$$\frac{50 \times 49 \times 48 \times 47 \times 46 \times 45 \times 44}{7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1} = 99{,}884{,}400$$

And 6 from 49: 1 in

$$\frac{49 \times 48 \times 47 \times 46 \times 45 \times 44}{6 \times 5 \times 4 \times 3 \times 2 \times 1} = 13{,}983{,}816$$
Can we see any patterns?
Let’s take the bottom of the fraction first. That’s a function of the number of balls we get to choose – let’s call that number $k$.

Spot test: Can you write out an expression for this function in the form $f(k) = \ldots$?

Answer: $f(k) = k \times (k-1) \times (k-2) \times \cdots \times 2 \times 1$

We call this function ‘factorial’, and we usually write it as $k!$
Now, let’s look at the numerators. They clearly aren’t just factorials, but they seem to be related.
Because $49 \times 48 \times 47 \times 46 \times 45 \times 44$ is $10{,}068{,}347{,}520$, and a bit big to keep in our heads, we’ll pick a smaller example.
Let’s look at picking three numbers from five.
Again, the bottom is easy – as usual, that’s $3 \times 2 \times 1 = 3! = 6$
What about the top? First, let’s multiply out so we can see what sort of number we’re dealing with: $5 \times 4 \times 3 = 60$

Now, we know that $5! = 5 \times 4 \times 3 \times 2 \times 1 = 120$

But we don’t want all of that – we just want $5 \times 4 \times 3$. $5!$ is too big, to the tune of a factor of $2 \times 1$

No problem, we can just divide it through: $\dfrac{5!}{2 \times 1} = \dfrac{120}{2} = 60$

And we recognize that $2 \times 1$ is a factorial too, and we end up with $\dfrac{5!}{2!} = 5 \times 4 \times 3 = 60$
It should be obvious where the 5 comes from – that is just the number of balls we’ve got to pick from.
But how did we get the 2? We want to end up with a number of terms equal to the number of balls we’re drawing. So we need to divide out by the factorial of the total number of balls (which we can call $n$), less the number of balls to pick (which we have been calling $k$), or $(n-k)!$.
In this example, $n - k = 5 - 3 = 2$, so, as we expected, $\dfrac{5!}{(5-3)!} = \dfrac{5!}{2!}$
So, if we are picking $k$ from $n$, our numerator is always $\dfrac{n!}{(n-k)!}$
Now, we can go back and combine our denominator with our numerator to provide the equation that allows us to calculate the probability of winning any draw-$k$-balls-from-$n$ lottery: the odds are 1 in

$$\frac{n!\,/\,(n-k)!}{k!}$$

Spot test: can you substitute our factorials back in to our equation and simplify it into a single fraction?

Answer: $$\frac{n!}{k!\,(n-k)!}$$
We call this the combination function as it tells us the number of ways we can pick k items from a set of n items, if the order of selection does not matter.
Sometimes, you see this k-from-n combination function written down like this:

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

In this form, we call $\binom{n}{k}$ a binomial coefficient, and read it as “n choose k”. There are loads of applications for this – wherever we need to choose a subset of items from some larger set. Lottery fantasies are just one.
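As a quick sanity check (a worked example added here, using the figures from earlier), the formula reproduces both sets of lottery odds:

```latex
\binom{49}{6} = \frac{49!}{6!\,43!}
             = \frac{49 \times 48 \times 47 \times 46 \times 45 \times 44}{720}
             = 13{,}983{,}816
\qquad
\binom{50}{7} = \frac{50!}{7!\,43!} = 99{,}884{,}400
```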
OK, so given a particular number of balls (n), I could use this function to display a table that shows me the odds of winning the jackpot, given a particular number of balls in the draw (k).
In the interests of not getting bored, we can turn this into an F# function:
Something like
let lotteryOdds n k = factorial n / (factorial k * factorial (n-k))
That’s a good start – but it won’t work just yet. We have to implement that factorial function.
One way to do that is to use a very powerful tool called recursion.
Let’s look back at our factorial function again: $x! = x \times (x-1) \times (x-2) \times \cdots \times 2 \times 1$

What about the expression for $(x-1)!$? Can you write out its expansion in the same way?

$(x-1)! = (x-1) \times (x-2) \times \cdots \times 2 \times 1$

They look really similar – in fact: $x! = x \times (x-1)!$
To explore this further, let’s see if we can write that as a function in F#
Here’s a first effort.
let factorial x = x * factorial (x-1)
Notice that we’re calling the factorial function from within the definition of the factorial function itself! This is what we call recursion.
Unfortunately, if we try that, F# comes back with an error:
let factorial x = x * factorial (x-1);;
----------------------^^^^^^^^^
stdin(3,23): error FS0039: The value or constructor 'factorial' is not defined
In some languages (C++, C# or Java for instance), this wouldn’t be an error, but in F# there’s a special bit of syntax we use to specify that a function can be called recursively. We have to add the keyword rec.
So here’s our second go.
let rec factorial x = x * factorial (x-1)
OK – F# responds happily with
val factorial : x:int -> int
BUT! Before we call it, let’s work this through on paper for a simple example and see what happens. We’ll write each recursive call on a separate line, and indent so we can see what is happening.
factorial 5 =
5 *
    (5-1) *
        (4-1) *
            (3-1) *
                (2-1) *
                    (1-1) *
                        (0-1) *
                            (-1-1) *
                                ...
Oh dear! This going to go on for ever! We need it to stop, eventually. The problem is that whenever we’ve been doing factorials by hand, we’ve stopped before we spill over into the negative integers.
How can we persuade our function to stop?
Well, we’re missing one important fact about factorials. When we get to 0!, we say that it is, by definition, 1, and is not, therefore, defined in terms of the factorial of x-1. This gives our recursion an end. That means our original attempt to define a factorial function was wrong – it should have looked more like this:
let rec factorial x =
    match x with
    | 0 -> 1
    | x when x > 0 -> x * factorial(x-1)
    | _ -> failwith "You cannot calculate the factorial of a negative number using this function."
The keyword here is match – we're going to match x with a variety of different patterns.
Note that we start each pattern definition on a new (indented) line with the vertical pipe symbol |. (This looks a bit like our big curly bracket in our mathsy version of the expression.)
As I mentioned, there are a variety of different patterns we can use, and in this function, we use all three kinds. Let's look at each one in turn.
| 0 -> 1
This one is fairly straightforward; we can read it as "if x is 0, then the match goes to 1". We can use this match for any particular value of x. For example, we could hard wire the result for 5! if we wanted, by adding the additional match:
| 5 -> 120
(We won't, though.)
The second match is a slightly more complex expression
| x when x > 0 -> x * factorial(x-1)
We can read that as "for all values of x when x is greater than 0, the match goes to our recursive factorial function call."
What about the last one?
| _ -> failwith "You cannot calculate the factorial of a negative number using this function."
The _ symbol means "for all other cases", and in this example we're using a special F# function called failwith which raises an error with a message. Notice that we've put the message in quotation marks – this marks it out as a string, which is a way of representing text in the computer. We'll have more on that later.
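To see that last pattern fire, we can call the function with a negative number. (A small sketch; note that the parentheses around -1 matter, because F# would otherwise parse factorial -1 as a subtraction.)

```fsharp
// the match-based factorial from above
let rec factorial x =
    match x with
    | 0 -> 1
    | x when x > 0 -> x * factorial(x-1)
    | _ -> failwith "You cannot calculate the factorial of a negative number using this function."

// catch and print the error raised by failwith
try
    factorial (-1) |> ignore
with ex ->
    printfn "%s" ex.Message
```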
Of course, you don't have to use the match keyword in recursive functions alone. In signal processing, there's a thing called a high-pass filter. If the signal is above a certain threshold, it does nothing; otherwise it attenuates the signal to zero. Think of it a bit like a bass-cut button on your hifi – it leaves the high-frequency sound alone, but trims out the low-frequency signals.
We could write down a function for this:
And then convert this into an F# function
let hipass x xmax =
    match x with
    | x when x < xmax -> 0
    | _ -> x
Let's give that a go. If the signal is 10 and the high-pass filter is set at 5, we get:
hipass 10 5
F# responds
val it : int = 10
Good - so above the threshold our original number is passed through.
Let's try one below the threshold.
hipass 2 5
val it : int = 0
Spot test: What will the response be if I choose a value exactly at the threshold?
Answer:
hipass 5 5
val it : int = 5
Is that what you expected? Notice that the threshold specifies strictly less than, so values at the threshold will be passed through.
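If you wanted values exactly at the threshold to be cut as well, you would just change the strict comparison. Here's a sketch (the name lowCut is my own, not from the original):

```fsharp
// hipass as defined above: strictly-below-threshold values go to zero
let hipass x xmax =
    match x with
    | x when x < xmax -> 0
    | _ -> x

// an inclusive variant: values at the threshold are cut too
let lowCut x xmax =
    match x with
    | x when x <= xmax -> 0
    | _ -> x

printfn "%d" (hipass 5 5)   // passes through: 5
printfn "%d" (lowCut 5 5)   // cut to zero: 0
```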
Ok, back to our factorial function.
let rec factorial x =
    match x with
    | 0 -> 1
    | x when x > 0 -> x * factorial(x-1)
    | _ -> failwith "You cannot calculate the factorial of a negative number using this function."
Let's try it out.
factorial 5
val it : int = 120
So far, so good!
What about
factorial 50
val it : int = 0
Zero? What's happened here? Well, we're back to the problem of representing numbers in computer memory again. Remember that a signed 32 bit integer can store a number up to $2^{31}-1$ (about $2.1 \times 10^9$) in magnitude. $50!$ is approximately $3 \times 10^{64}$ – somewhat larger than we can cope with! Larger, even, than a 64 bit integer could represent. In fact, a signed 128 bit integer would still be too small. We'd need to double up again to a signed 256-bit integer to cope with a number this large.
(Factorials get really big, really quickly - and that will be important again, later.)
Clearly, we're going to have to look at using a different data type to represent our numbers. F# provides us with a type called bigint which can be used to represent arbitrarily large numbers (limited, more or less, by the amount of data memory in your system). We need to represent this very large number in the output of our factorial function, but we're still happy to pass an integer in as our parameter. Here's a go at defining the function to use bigint:
let rec factorial x =
    match x with
    | 0 -> 1I
    | x when x > 0 -> bigint(x) * factorial(x-1)
    | _ -> failwith "You cannot calculate the factorial of a negative number using this function."
Try that, and F# responds
val factorial : x:int -> System.Numerics.BigInteger
So this is now a function that takes an int and returns a System.Numerics.BigInteger. (That's the full name for our bigint type.)
There are two interesting lines in that function definition. First, there's the one where we map the integer 0 to the bigint value 1.
| 0 -> 1I
Notice that we use the suffix I to indicate that this number should be interpreted as a big integer, rather than a regular 32 bit integer.
The other line of interest is in the factorial function:
| x when x > 0 -> bigint(x) * factorial(x-1)
The value x is an integer, and we use the syntax bigint(x) to convert it from an int into a bigint. We call this kind of conversion a type cast. We'll have a lot more about types later in the series.
Let's try that out.
factorial 50
val it : System.Numerics.BigInteger =
30414093201713378043612608166064768844377641568960512000000000000
{IsEven = true;
IsOne = false;
IsPowerOfTwo = false;
IsZero = false;
Sign = 1;}
That looks like a very big number. Notice that bigint also seems to have a bunch of other values associated with it – whether it is even, whether it is a power of two, its sign, etc. We'll learn more about that when we come to explore types.
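With a bigint-returning factorial in place, the lotteryOdds function we sketched right at the start of this section now works. This just combines the two definitions from the article; the division is exact, because $k!\,(n-k)!$ always divides $n!$ evenly.

```fsharp
// the bigint factorial from above
let rec factorial x =
    match x with
    | 0 -> 1I
    | x when x > 0 -> bigint(x) * factorial(x-1)
    | _ -> failwith "You cannot calculate the factorial of a negative number using this function."

// the combination-based odds function from the start of the section
let lotteryOdds n k = factorial n / (factorial k * factorial (n - k))

printfn "%A" (lotteryOdds 49 6)   // UK lottery: 13983816
printfn "%A" (lotteryOdds 50 7)   // Euro lottery: 99884400
```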
OK, that's great. We can now calculate the factorial of lottery-sized numbers. But is this recursive technique the best way to do it?
What happens if we try an even bigger number? Our BigInteger should be able to cope with the result, but let's see what happens if we try to calculate the factorial of one million:
factorial 1000000
Process is terminated due to StackOverflowException.
Ouch.
Next time, we're going to find out why that blew up so spectacularly.
Learning To Program – A Beginners Guide – Part One - Introduction
Learning To Program – A Beginners Guide – Part Two - Setting Up
Learning To Program – A Beginners Guide – Part Three - What is a computer?
Learning To Program – A Beginners Guide – Part Four - A simple model of a computer
Learning To Program – A Beginners Guide – Part Five - Running a program
Learning To Program – A Beginners Guide – Part Six - A First Look at Algorithms
Learning To Program – A Beginners Guide – Part Seven - Representing Numbers
Learning To Program – A Beginners Guide – Part Eight - Working With Logic
Learning To Program – A Beginners Guide – Part Nine - Introducing Functions
Learning To Program – A Beginners Guide – Part Ten - Getting Started With Operators in F#
Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions
Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks - Recursion in F#
Exercise 1: Remember the exercises in our first introduction to algorithms? Can you implement functions in F# for the sum of an arithmetic series and the sum of a geometric series?
Remember that this is the formula for an arithmetic series:

$$S = \frac{((n - m) + 1)(a_m + a_n)}{2}$$

where $d$ is the difference between each term, $m$ is the index of the first term in the progression that we want to include in the sum, and $n$ is the index of the last term we want to include; $a_m$ is the value of the first term, and $a_n$ is the value of the last term.
So, for the progression 1, 3, 5, 7, 9, 11:

$d$ is 2

$m$ is 1 (from the 1st term)

$n$ is 6 (to the 6th term)

$a_m$ is 1

$a_n$ is 11
We could define this function in F# as follows:
let arithmeticseries m n am an = (((n - m) + 1) * (am + an)) / 2
We’ve defined it with 4 parameters – a record for us so far! – and bound it to an identifier called arithmeticseries (although, of course, we could have picked any name we liked).
What do you think F# will respond for this definition?
val arithmeticseries : m:int -> n:int -> am:int -> an:int -> int
So, this is a function that takes an integer, and returns a function that takes an integer and returns a function that takes an integer and returns a function that takes an integer and returns an integer! A bit of a mouthful, but the same pattern as our “two parameter” function, and no more difficult to deal with!
Let’s try out our function on our example progression (whose sum is, incidentally, 1+3+5+7+9+11 = 36)
arithmeticseries 1 6 1 11
val it : int = 36
Looks good! Let’s move on to the second part of this exercise, the geometric progression.
Remember that a geometric progression is one in which each term is a constant multiple of the previous term: $a$, $ar$, $ar^2$, $ar^3$, …

For example 3, 6, 12, 24, 48 is a geometric sequence where $a = 3$ and $r = 2$, and its sum is 93.

The formula for the sum of a geometric progression, where $a$ is the value of the first term in the sequence, and $r$ is the constant multiplier, is:

$$S = \frac{a\left(1 - r^{(n-m)+1}\right)}{1 - r}$$
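Plugging our example progression into the formula (a worked check, with $m = 1$, $n = 5$, $a = 3$ and $r = 2$) gives the expected sum:

```latex
S = \frac{3\left(1 - 2^{(5-1)+1}\right)}{1 - 2} = \frac{3(1 - 32)}{-1} = 93
```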
In the hints for this exercise, we mentioned a function called pown which raises one value to the power of another. Let’s use that to translate the formula into the definition of a function for a geometric series.
let geometricseries m n a r = (a * (1 - pown r ((n - m) + 1))) / (1 - r)
F# responds
val geometricseries : m:int -> n:int -> a:int -> r:int -> int
Now we can test it out.
geometricseries 1 5 3 2
val it : int = 93
Did you get that? If so, that’s definitely a moment of triumph. You’ve translated a fairly complex mathematical formula into a neat little function in F#. If not, go back and see how the F# function definition maps on to the mathematical expression. Don’t forget that the pown function is applied to the parameter(s) immediately to its right. Once you’ve worked it out, take a few seconds to bask in the moment of triumph, then move on!
Exercise 2: Another derived Boolean operator is called the equivalence operator. It is true if the two operands are equal, otherwise it is false. First, draw out the truth table for the equivalence operator. Then, work out a compact Boolean expression for it. Finally, implement the equivalence operator as an F# operator.
Here’s the truth table for the equivalence operator (for which we use the symbol $\equiv$):

x | y | x ≡ y
true | true | true
false | true | false
true | false | false
false | false | true
Compare this with the truth table for XOR:

x | y | x XOR y
true | true | false
false | true | true
true | false | true
false | false | false
Can you see that $x \equiv y = \lnot(x\ \mathrm{XOR}\ y)$?
This gives us a big hint as to how we could implement it – by applying the not operator to our Boolean expression for XOR.
We could write that in F# as:
let (|==|) x y = not ((x || y) && not (x && y))
F# responds as you might expect for a standard “two parameter” function:
val ( |==| ) : x:bool -> y:bool -> bool
(You might have picked a different identifier for your operator, of course – the choice is yours.)
Let’s test that out by reproducing the truth table.
true |==| true
val it : bool = true
true |==| false
val it : bool = false
false |==| true
val it : bool = false
false |==| false
val it : bool = true
So far so good! It works, but it is a little more unwieldy than the XOR definition – we’ve added an extra term. Given that they are so similar, should it not be possible to express equivalence just as succinctly as we did XOR? The answer is yes, but to do that, we need to learn some more of the rules of Boolean algebra.
In regular maths, you’re probably so familiar with the rules of algebra, that you don’t even think about them as being laws at all, just “the way things are”. But there’s nothing magic about them – they’re just rules people have made up to try to create a consistent system of mathematics. Brace yourself. There’s a lot of detail coming up, so take it slowly and experiment with the rules as we come across them.
Two of the most familiar are called associativity and commutativity. Don’t be put off by the names if you haven’t heard them before – you’ll recognize them when you see them. Here’s an example of the law of associativity for addition:

$$(a + b) + c = a + (b + c)$$
You’re probably thinking “well, obviously!”. We saw a similar example when we were talking about operator precedence. If so, good – this should be obvious!
Spot test: give an example of the law of associativity for multiplication.
Answer: $(a \times b) \times c = a \times (b \times c)$
Now, commutativity. This is the idea that the ordering of the operands doesn’t matter. Here’s an example for addition:

$$a + b = b + a$$
Spot test: give an example of the law of commutativity for multiplication.
Answer: $a \times b = b \times a$
Again, you’re probably thinking that this is painfully obvious stuff.
OK – so let’s look at something a bit more complicated. What about distributivity? This is the idea that multiplication “distributes” over addition – like this:

$$a \times (b + c) = (a \times b) + (a \times c)$$
Spot test: Does addition distribute over multiplication?
Answer: No, it doesn’t: $a + (b \times c) \ne (a + b) \times (a + c)$
Another law is called identity. This is the notion that there is some operation that results in the original operand.
Here’s the identity law for addition:

$$a + 0 = a$$
Spot test: what is the identity law for multiplication?
Answer: $a \times 1 = a$
One last common law is the annihilator for multiplication. If you multiply anything by zero, you get zero:

$$a \times 0 = 0$$

Notice how this “annihilates” the $a$ term from the result.
We use these rules all the time to help us manipulate algebraic expressions. Remember when we were trying to derive a formula for an arithmetic progression? Amongst other things, we specifically used the fact that we could write the whole expression forwards or backwards, and that this would be equivalent – this relied on commutativity.
In our previous section on Boolean logic, we noted that the Boolean operator $\land$ (AND) is broadly equivalent (in regular algebra) to multiplication, and the $\lor$ (OR) operator is equivalent to addition. This similarity holds true for all of these laws, for which there are equivalents in Boolean algebra.
Spot test: can you write out the laws of associativity, commutativity, distributivity, identity and annihilation for the Boolean operators $\land$ and $\lor$?
Answer:

Associativity: $(x \land y) \land z = x \land (y \land z)$ and $(x \lor y) \lor z = x \lor (y \lor z)$

Commutativity: $x \land y = y \land x$ and $x \lor y = y \lor x$

Distributivity: $x \land (y \lor z) = (x \land y) \lor (x \land z)$

Identity: $x \land \text{true} = x$ and $x \lor \text{false} = x$

Annihilation: $x \land \text{false} = \text{false}$
Did you get that lot? Take a moment of triumph! If not, go back and look at the laws for regular algebra, and the equivalence of AND and multiplication, OR and addition, and see if you can work them out.
With practice, these laws of Boolean algebra will become just as ‘obvious’ as their equivalents in regular algebra. As usual, the laws aren’t complicated, but the symbols take some getting used to.
Of course, Boolean algebra is not exactly equivalent to the algebra you already know. It adds a few laws of its own.
First, there’s idempotence. This is the idea that if the inputs to the operator are the same, then the output is the same as the input:

$$x \land x = x \qquad x \lor x = x$$
This is very different from the equivalent expressions in regular algebra, where $a \times a = a^2$ and $a + a = 2a$. (So multiplication and addition are not idempotent!)
Another law is called absorption. Let’s have a look at the expressions, and you’ll see why it got that name:

$$x \land (x \lor y) = x \qquad x \lor (x \land y) = x$$
It is as if the AND operator “absorbs” the OR expression that follows (and vice-versa).
There’s also a wrinkle with distribution. A minute ago, we saw how in regular arithmetic, multiplication distributed over addition:

$$a \times (b + c) = (a \times b) + (a \times c)$$

And in Boolean algebra, the equivalent AND distributes over OR:

$$x \land (y \lor z) = (x \land y) \lor (x \land z)$$

And while addition doesn’t distribute over multiplication…

$$a + (b \times c) \ne (a + b) \times (a + c)$$

In Boolean algebra, OR does distribute over AND:

$$x \lor (y \land z) = (x \lor y) \land (x \lor z)$$

Another place in which Boolean algebra is more symmetrical than the algebra we know and love is annihilation. There is also an annihilator for OR, as well as the one for AND:

$$x \lor \text{true} = \text{true}$$
Still with me? We’re nearly done; there’s one more set of laws to look at. We’ve covered AND and OR, addition and multiplication, but we’ve not yet had anything to say about negation.
As usual, the laws of multiplying and adding with negation in regular algebra are so familiar they seem to be stating the obvious:

$$(-a) \times (-b) = a \times b \qquad (-a) + (-b) = -(a + b) \qquad -(-a) = a$$

The first is the familiar “two negatives make a positive” rule for multiplication, the third is “double negation”. The second tells us that adding together two negated values is equivalent to adding together the two values and negating the result.

But in Boolean logic, the basic rules are a bit different, and are called complementation:

$$x \land \lnot x = \text{false} \qquad x \lor \lnot x = \text{true} \qquad \lnot(\lnot x) = x$$
We can also write down a couple of rules derived from these laws of complementation called de Morgan’s laws. These are really interesting because they allow us to express the AND operator purely in terms of the OR operator and negation; and vice-versa:

$$\lnot(x \land y) = \lnot x \lor \lnot y \qquad \lnot(x \lor y) = \lnot x \land \lnot y$$
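We can also convince ourselves of de Morgan's laws by checking every combination of inputs in F# – a quick sketch using the built-in not, && and || operators:

```fsharp
// both de Morgan laws, checked over all four input combinations
let deMorganHolds =
    [ for x in [true; false] do
        for y in [true; false] ->
            (not (x && y) = (not x || not y)) &&
            (not (x || y) = (not x && not y)) ]
    |> List.forall id

printfn "%b" deMorganHolds   // true
```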
Now – let’s go back to where we started and look at our expression for the equivalence operator: $\lnot((x \lor y) \land \lnot(x \land y))$

If we say $a = x \lor y$

and $b = \lnot(x \land y)$

Then we could rewrite this as $\lnot(a \land b)$

But looking at de Morgan’s laws above, we can see that this is equivalent to $\lnot a \lor \lnot b$

Now $\lnot a = \lnot(x \lor y)$

and $\lnot b = \lnot(\lnot(x \land y))$

We know the rule for double negation – it is one of our complementation rules above. So it follows that $\lnot b = x \land y$

We can substitute this back into our equation: $\lnot(x \lor y) \lor (x \land y)$

Remember our original expression for XOR? $(x \lor y) \land \lnot(x \land y)$

We can use the law of commutativity to swap our expression for equivalence into the same form: $(x \land y) \lor \lnot(x \lor y)$

This is clearly simpler than the original form, and we say that we have minimized the expression.
When you get used to the laws, you could have done this in one quick step; you’d remember de Morgan’s laws, negate the two terms either side of the central AND operator, and flip the operator from AND to OR. This is probably the most common day-to-day Boolean minimization you’ll carry out on real-world expressions.
Let’s check that it is still correct by implementing it in F#.
Spot test: Can you implement this new form for the equivalence operator in F#?
Answer:
let (|==|) x y = (x && y) || not (x || y)
F# responds with
val ( |==| ) : x:bool -> y:bool -> bool
So the form of our operator is, of course, still correct. Let’s check that it does what we expect!
true |==| true
val it : bool = true
true |==| false
val it : bool = false
false |==| true
val it : bool = false
false |==| false
val it : bool = true
There are various systematic approaches to minimization – from the repeated application of these laws of algebra, to something called a Karnaugh Map.
One simple way to minimize a function is to apply the law of complementation. Specifically, you look to rearrange any Boolean expression to generate terms that look like this:

x ∨ ¬x

Those terms can then be immediately eliminated, because x ∨ ¬x = 1, and ANDing anything with 1 leaves it unchanged.
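To convince yourself that such terms really do collapse, try this quick F# sketch (the function names are invented for illustration):

```fsharp
// x OR (NOT x) is true for either input; x AND (NOT x) is false for either input
let orComplement x = x || not x
let andComplement x = x && not x

orComplement true    // val it : bool = true
orComplement false   // val it : bool = true
andComplement true   // val it : bool = false
andComplement false  // val it : bool = false
```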
For example, consider this slightly brain-bending expression:

(x ∧ y ∧ ¬z) ∨ (¬x ∧ y ∧ ¬z)

We can rearrange this to something much simpler:

y ∧ ¬z

No – really; we can! First let’s say

a = y ∧ ¬z

We can then rewrite the first expression as

(x ∧ a) ∨ (¬x ∧ a)

Applying our law of commutativity on each term, this is the same as

(a ∧ x) ∨ (a ∧ ¬x)

We recognize this as an example where our distribution law applies, so that becomes

a ∧ (x ∨ ¬x)

And since

x ∨ ¬x = 1

This becomes

a ∧ 1 = a

Substituting the value of a back in, we get our much simpler expression:

y ∧ ¬z
We went from the brain-bending to the simple in a few sort-of-easy steps.
Let’s check that they’re actually the same, using F# to build the two truth tables. First for the complex expression.
Spot test: create an F# function to implement (x ∧ y ∧ ¬z) ∨ (¬x ∧ y ∧ ¬z), and then build the truth table for the expression.
Answer:
let expression1 x y z = (x && y && not z) || (not x && y && not z)
x | y | z | result |
1 | 1 | 1 | 0 |
1 | 1 | 0 | 1 |
1 | 0 | 1 | 0 |
1 | 0 | 0 | 0 |
0 | 1 | 1 | 0 |
0 | 1 | 0 | 1 |
0 | 0 | 1 | 0 |
0 | 0 | 0 | 0 |
Notice how the results in the truth table don’t depend on the value of x at all – this is a good sign that we can eliminate x entirely.
Let’s try our second expression
Spot test: create an F# function to implement y ∧ ¬z, and then build the truth table for the expression.
Answer:
let expression2 y z = y && not z
y | z | y ∧ ¬z |
1 | 1 | 0 |
1 | 0 | 1 |
0 | 1 | 0 |
0 | 0 | 0 |
Success!
So why bother minimizing at all? This is an excellent question. You’ll often hear arguments that it is “more efficient” to minimize the expression – but for anything but the most extraordinary expressions (or implementation in discrete electronic components) this is a bit of a side issue.
The usual reason to minimize is to make it more comprehensible. Human brains are not great at double negatives and extremely long chains of reasoning, so a compact expression is generally more understandable.
However, this can also be a good reason not to minimize. If you have well-defined clauses (wrapped in parentheses and indented neatly) that mean something obvious individually, then it may be better to leave them un-minimized.
In the case of the equivalence operator, you might make the argument that the expanded version made it clear that it was the complement of the XOR operator – but I think that argument is a little weak. The minimized version is simple to read, and the two expressions are recognizably similar. When you get used to Boolean algebra, you will also recognize that they have the form of complementary expressions: they are otherwise identical, but all the ANDs and ORs are swapped around.
In the second example, the benefits of minimization were clear – we eliminated one entire variable, and made the expression much simpler to read.
So, the general rule for minimization is to make it as compact and understandable as possible.
OK, that was a lot of information. There’s no substitute for some practice with this stuff. For most developers, the rules of logic eventually become second nature, just like the ones for regular algebra. (So much so, that they often forget that they’ve learned them!)
With that in mind, here are a couple of exercises. The answers are at the bottom of the page.
Exercise 1: Minimize the following Boolean expressions
a) (x ∨ ¬y) ∧ (z ∧ ¬y)
b) ¬((x ∨ y) ∧ (z ∨ ¬y))
c) (x ∧ ¬y) ∨ ¬((x ∨ y) ∧ (z ∨ ¬y))
Exercise 2: Implement F# functions for the expressions above (both minimized and as originally stated), and verify their truth tables.
Learning To Program – A Beginners Guide – Part One – Introduction
Learning To Program – A Beginners Guide – Part Two – Setting Up
Learning To Program – A Beginners Guide – Part Three – What is a computer?
Learning To Program – A Beginners Guide – Part Four – A simple model of a computer
Learning To Program – A Beginners Guide – Part Five – Running a program
Learning To Program – A Beginners Guide – Part Six – A First Look at Algorithms
Learning To Program – A Beginners Guide – Part Seven – Representing Numbers
Learning To Program – A Beginners Guide – Part Eight – Working With Logic
Learning To Program – A Beginners Guide – Part Nine – Introducing Functions
Learning To Program – A Beginners Guide – Part Ten – Getting Started With Operators in F#
Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions
Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks – Recursion in F#
Exercise 1
a) ¬y ∧ z
b) (¬x ∧ ¬y) ∨ (y ∧ ¬z), or alternatively, (¬x ∨ y) ∧ (¬y ∨ ¬z)
c) ¬y ∨ ¬z
Exercise 2
a1) let exa1 x y z = (x || not y) && (z && not y)
a2) let exa2 y z = not y && z
b1) let exb1 x y z = not ((x || y) && (z || not y))
b2) let exb2 x y z = (not x && not y) || (y && not z)
or (not x || y) && (not y || not z)
c1) let exc1 x y z = (x && not y) || not ((x || y) && (z || not y))
c2) let exc2 y z = (not y) || (not z)
We’re going to build on that this time, so it might be a good idea to go over the key points:
1) A function takes exactly one input (parameter) and produces one output (result). We can write this as x → f(x)
2) We can bind a function to an identifier using the let keyword.
let increment x = x + 1
3) We can define a function (with or without a binding) by using a lambda
fun x -> x + 1
4) A function is applied to the value to its immediate right
increment 3
5) A function doesn’t have to return a simple value; it can return a function too
let add x = fun y -> x + y
6) Applying 4) and 5) allows us to create functions which appear to take multiple parameters. F# has shorthand syntax to help
let add x y = x + y
add 2 3
7) We can still capture the intermediate function, effectively binding one of its parameters. We call this ‘currying’
let add2 = add 2
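As a quick, self-contained recap of how such a curried binding is used:

```fsharp
let add x = fun y -> x + y   // add returns a function
let add2 = add 2             // bind the first parameter to 2

add2 10   // val it : int = 12
add2 40   // val it : int = 42
```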
Finally, we left off with an exercise.
Exercise: Create a function that applies the logical XOR operator we worked out in the previous section.
Remember that we learned that the derived operator XOR can be constructed from AND, OR and NOT operators like this:

(x ∨ y) ∧ ¬(x ∧ y)
You will probably also remember that the F# symbol for the logical AND operator is &&, OR is ||, and NOT is not. Knowing what we do about functions, we can define a function for the XOR operator.
Spot test: Define a function bound to the identifier xor which implements the XOR operator.
As usual, give it a go yourself before you look at the answer. Check it out in your F# environment.
Answer:
let xor x y = (x || y) && not (x && y)
Try that, and F# responds
val xor : x:bool -> y:bool -> bool
So, we have defined a function bound to an identifier called xor that takes a boolean, and returns a function that takes a boolean and returns a boolean – our usual pattern for a function that “takes two parameters”.
We can now make use of this to build the truth table for XOR, tidying up a loose end from our section on logic.
xor false false
val it : bool = false
xor true false
val it : bool = true
xor false true
val it : bool = true
xor true true
val it : bool = false
That’s a good start, but it doesn’t look quite right. We’re calling our xor function in the standard way: applying the function to the value to its right (or prefix syntax). But the similar operators || and && appear between their parameters, which we call infix syntax.
F# provides us with a means of defining a special kind of function called, unsurprisingly, an operator, which works in just this way.
Defining an operator is just like defining a function – with a couple of little wrinkles.
The first wrinkle is the name – the name has to consist of some sequence of these characters:
!, %, &, *, +, -, ., /, <, =, >, ?, @, ^ and |
The second wrinkle is to do with operator precedence. You’ll remember in the section on logic that we discussed how multiplication and division take precedence over addition and subtraction, and that logical AND takes precedence over logical OR. The precedence of a custom operator that we define is determined by the characters we use in its identifier. This can be a bit tricky to get used to!
For XOR we want a name that reminds us of the XOR symbol but which takes the same kind of precedence as OR. Let’s use |+|. It has got the pipe characters of OR, along with a plus symbol, so it looks vaguely similar.
So – how do we define an operator? As you might expect, the syntax is very similar to a function:
let ({identifier}) x y = {function body}
And here’s how we might define our XOR operator:
let (|+|) x y = (x || y) && not (x && y)
Just like a regular function binding to an identifier, except that we’re wrapping the identifier in parentheses (round brackets).
Spot test: What do you think F# will respond?
Answer: This is basically just our standard “two parameter” function pattern, so you’d expect a function that takes a boolean, and returns a function that takes a boolean and returns a boolean. And that’s just what we get. Notice that the round brackets are still shown around the identifier.
val ( |+| ) : x:bool -> y:bool -> bool
Now, though, we can try out our infix XOR operator.
true |+| false
val it : bool = true
true |+| true
val it : bool = false
So, now we know how to define functions and operators, and we’re armed with a basic knowledge of logic, we can go on to try to solve some more complex problems. But first, a couple of exercises.
Exercise 1: Another derived boolean operator is called the equivalence operator. It is true if the two operands are equal, otherwise it is false. First, draw out the truth table for the equivalence operator. Then, work out a compact boolean expression for it. Finally, implement the equivalence operator as an F# operator.
Hint: What is the relationship between the equivalence operator and the exclusive or operator?
Exercise 2: Remember the exercises in our first introduction to algorithms? Can you implement functions in F# for the sum of an arithmetic series and the sum of a geometric series?
Hint: It is probably useful to know that, in addition to + and -, F# uses / for division and * for multiplication. These are all infix operators. There is also a function called pown which is of the familiar “two parameter” prefix style, and raises one value to the power of another. Here’s 2 to the power of 3, for example:
pown 2 3
val it : int = 8
(Answers will be at the start of next week’s instalment)
This is where functions come in.
A function is the smallest useful unit of a program: it takes exactly one input, and transforms it into exactly one output. Here’s a block diagram to represent that.
(It’s worth noting that when we’re talking about inputs and outputs in the context of a function, these aren’t the I/O operations of our block diagram back in the simple model of a computer – the keyboards, monitors, printers and so forth. The input is just a parameter of the function and the output is its result.)
As usual, there are lots of ways of describing a function. You might have seen this block diagram expressed in a more compact mathematical form:

x → f(x)
You can read that as ‘x’ ‘goes to’ ‘f-of-x’. Mapping that on to the diagram above, you can see that x is the input parameter, f is our function, and f(x) is our output – the result of applying the function f to x.
Let’s get more specific. How could we describe a function whose result is the value of the input, plus 1?

x → x + 1

So, as you might expect, you can read that as ‘x’ ‘goes to’ ‘x plus 1’. And by observation, we can say that

f(x) = x + 1
So, that’s the block diagram and the general mathematical form. What about some code?
Different programming languages have all sorts of different ways of defining a function. We’re going to focus on F# for our examples. The syntax may vary from language to language, but the principles are the same. If you learn the principles, you can apply that knowledge to any code you come across.
It’s not just different languages that introduce some variety into the way you can define a function, though. There are lots of ways of defining a function in F#, depending on the context. We’re going to avoid some of that detail, for the time being, and start out with a simple example, where we define a function and bind it to an identifier, so that we can call it as often as we like.
There are a lot of seemingly innocuous English words in that last sentence, like ‘call’ and ‘bind’. But what do they actually mean?
We say that we call a function when we give it an input, and ask it to evaluate the output for us.
An identifier is a named value, and when we bind a function (or other value) to an identifier, we associate the name with that function or value (forever!).
Here’s a simple example of a binding in F#. You can start up your F# runtime environment (fsi or fsharpi) and try it out.
let x = 3
(Don’t forget to type ;; when you want the runtime to evaluate your input.)
So, the syntax for a binding is as follows:
let {identifier} = {value}
F# responds with
val x : int = 3
You can read the result as ‘the value called x is a 32-bit integer which equals 3’.
What about this notion of a binding being forever? Let’s experiment with that by binding 3 to the identifier y, and then trying to bind 4 to that same identifier.
let y = 3
let y = 4
We want F# to evaluate both of these lines as a block, so we type the first line, press return, and type the second line, before typing our usual ;; to kick off the evaluation.
F# responds with
let y = 4
----^
stdin(4,5): error FS0037: Duplicate definition of value 'y'
This tells us that F# is not happy. Our second attempt to bind the identifier y was an error. It has even drawn a handy arrow to the bit of our statement that was wrong. You’ll get very familiar with F# errors before we’re done!
We don’t have to bind a simple value to an identifier, though, we can bind a function, too.
Remember that when we bind a simple value to an identifier, we use the syntax
let {identifier} = {value}
But to bind a function to an identifier, we need to include a name for the input parameter too, so we use the syntax
let {identifier} {parameter} = {function body}
Let’s try it:
let increment x = x + 1
F# responds with:
val increment : x:int -> int
We can read that as ‘the value called increment is a function which takes a 32-bit integer (called x, as it happens), and goes to a 32-bit integer’.
We can then use the function. F# applies a function (increment, in this case) to the value immediately to its right, which it uses as its input parameter.
increment 3
F# responds:
val it : int = 4
Spot test: We learned how to read that kind of response in the previous section. What does it mean?
Answer: The result (‘the value called it’) is a 32-bit integer which equals 4.
So far so good! We’ve created our first function.
A function doesn’t have to map its input to a number, though. One thing we could return from a function is another function. Let’s have a look at an example of that.
let add x = fun y -> x + y
If we execute that line, F# responds with:
val add : x:int -> y:int -> int
Can we read that? The value called add is a function which takes a 32-bit integer called x, and returns a function which takes a 32-bit integer called y, which returns a 32-bit integer.
It’s a bit long winded, but it’s quite straightforward if you follow it through, carefully. Read it a couple of times and see how it matches up with the F# response above.
Hang on a minute, you may be thinking. How exactly did we define the function that was returned? Let’s remind ourselves of the syntax for binding a function to an identifier again.
let {identifier} {parameter} = {function body}
In this case, the function body itself returns a new function. By inspection, this must be the code which defines the new function that forms the result:
fun y -> x + y
First, you can see that we are clearly not binding this function to an identifier. There’s nothing to say we couldn’t – we just haven’t. Secondly, we’re using a different syntax to define the function. We call this new syntax a lambda.
Compare it with the maths-y way of defining a function we looked at right at the beginning of this section:

y → x + y

It’s remarkably similar, but, as there isn’t a ‘→’ key on our computer, the designers of F# have used -> as a cute replacement. We also add the prefix fun to tell F# that we are starting to define a function.
So, how do we read it?
fun y -> x + y
This means ‘a function which takes a parameter called y, and goes to some value x plus y’.
This seems to be a bit lacking in information. Where do we get the x from, for a start?
Remember that our whole definition was:
let add x = fun y -> x + y
We say that the lambda function is being defined in the scope of the definition of the add function, or that the add function is the parent (or an outer) scope of the lambda (which is, conversely, a child or an inner scope of the add function). You can think of scopes like a series of nested boxes. An inner scope has access to the identifiers defined in any outer scope which contains it, but an outer scope cannot reach into an inner scope and rummage in its innards.
So, our lambda gets its x value from the outer scope – the parameter to the add function.
And what about an identifier for this function? Well, it doesn’t have one. We’ve not bound it to an identifier anywhere. We call this an anonymous function. Why doesn’t it have an identifier? Well, it doesn’t really need one. As we said above, there’s no way to ‘reach inside’ the body of the function and fish it out, so we don’t need to bother binding it to an identifier when we define it.
As we mentioned before, there’s nothing to stop us binding it to an identifier along the way, if it would make our code clearer. Here’s an example of that
let add x =
let inner = fun y ->
x + y
inner
Notice how the function body now extends over several lines, so we’ve had to indent it with spaces to tell the F# compiler that it should treat this all as a single block. If you look back up at our box diagram for the scopes, you can see that it looks quite similar – we just haven’t drawn the boxes! We’re still creating a lambda, but binding it to an identifier called inner, and then returning the value of that identifier, rather than returning the lambda directly.
Quick note: You might be tempted to use a tab to indent it neatly – that won’t work. F# requires that you use spaces. If you’re using an editor to write your F# code, make sure you have set it to “convert tabs to spaces” mode, and then you can hit the tab key to indent.
That’s a lot more verbose, and, in such a simple example, not really any clearer, so we’ll prefer the definition in our original form.
let add x = fun y -> x + y
In earlier sections, we talked about the fact that the choices you make when you choose how to implement an algorithm can influence the cost of that algorithm quite significantly – be that in terms of program length, memory consumption or computational effort. But there’s another way in which your choice of implementation can affect the system, and that’s in how easy it is to understand.
We may spend a long time carefully crafting some code, but if we come back to that code later and can’t work out how it works by looking at it, then we may misinterpret what it does, or how to set it up, or what its constraints might be. This is a rich source of errors in our programs! So, unless there is some absolutely critical reason why we shouldn’t (and there almost never is), we prefer to use short, simple functions with a minimum of extraneous detail, that do exactly one thing, and without side-effects in the rest of the system.
OK, so we’ve created a function that returns a function. What earthly use is that? Well, one way we can use it is to create a whole family of functions.
Try the following
let add1 = add 1
Spot test: What do you think F# is going to respond? Try and work it out before you look below.
Hint: We’re binding something to an identifier called add1, and the body of the binding is a call to the add function we just defined, where the value passed as the parameter called x is 1. Remember that the add function returns another function.
Answer:
val add1 : (int -> int)
We know that we can read that as “the identifier add1 is bound to a function that takes a 32-bit integer, and returns a 32-bit integer”.
Let’s try another
let add2 = add 2
What’s F# going to respond?
val add2 : (int -> int)
Clearly, we could do this for any arbitrary integer we wanted.
And how do we use any of these functions we’re creating? Well, add2 is an identifier bound to a function which takes an integer, and returns an integer, so there’s no magic to that.
Spot test: How would we use this function to calculate 2 + 3?
Answer:
add2 3
val it : int = 5
Spot test: What is 3657936 + 224890?
Hint: Don’t use your calculator! Use our fabulous integer addition function factory!
Answer:
let add3657936 = add 3657936
add3657936 224890
F# responds with
val add3657936 : (int -> int)
val it : int = 3882826
In fact, we can leave out that intermediate function binding entirely.
Try the following:
add 3 4
F# responds with
val it : int = 7
Excellent! But how did that work?
Well, remember that F# applies a function to the value immediately to its right, which it uses as the parameter to the function.
First, it applied the add function to the value to its right (the 32-bit integer 3) and that, as you know, returns another function. So it took that resulting function and applied it to the value to its right (the 32-bit integer 4), resulting in our answer: 7.
This is a very interesting result. The net effect is that we can add any two numbers together, even though any given function can only take one input, and produce one result, by using a function that returns a function.
This is such a useful pattern (and writing it all out by hand is a bit of a drag) that F# gives us a shorthand.
Instead of
let add x = fun y -> x + y
We can type
let add x y = x + y
F# responds
val add : x:int -> y:int -> int
Notice that this is exactly the same as our original definition – the value called add is a function that takes a 32-bit integer and returns a function that takes a 32-bit integer and returns a 32-bit integer.
And we can still type
add 3 4
to get
val it : int = 7
In this new shorthand syntax, we can just think of the function as taking several parameters – two in this case – but we know that what is really happening under the covers is that we are creating a function that takes the first parameter, and returns a function that takes the second parameter (and uses the first parameter, from the outer scope, in its body).
And if we need to, we can still capture that intermediate function, with its first parameter bound to a particular value.
let add342 = add 342
val add342 : (int -> int)
We call this binding of a subset of the parameters of a function currying and we’ll learn more about that in a later section.
Exercise: Create a function that applies the logical XOR operator we worked out in the previous section.
In the next section we’ll look at the answer to that exercise, and how we can make the function work more like the other logical F# operators we’ve already used.
One of the most common questions that has come from that post is “how do I achieve a section with a full-width bleed (e.g. for a full-width background), part way down a page?”
Something that looks roughly like this:
We’re going to deal with that now. Here’s the basic recipe:
That’s the secret sauce.
This is the extra slice of cheese.
This is the dangerous amount of jalapeno you sneak in under some shredded lettuce.
If you’re looking for a full-width background, for example, then you want to nest containers like this to provide a responsive pseudo-fixed-width container (which will appear inline with the rest of your responsive pseudo-fixed-width content), embedded within a full-width responsive background layer.
By adding multiple <container> elements, you can mix-and-match full-width and pseudo-fixed-width sections, up and down the page.
Of course, in any container, you can also throw in some completely custom HTML – you don’t need to follow the grid system at all if that doesn’t suit.
In this section, we’re going to look inside these kinds of statements, and focus on the bit that is just about truth and falsity. As usual, the approach will be to learn how to refine a statement from regular language into a rigorous, precise and repeatable form that is useful to a computer, but captures the real-world intent. Ultimately these kinds of statements underpin all of the decision making processes in our programs.
The notion of truth and falsity has two important characteristics for digital computers.
First, it is definite – there is no room for doubt or ambiguity. A statement is either true, or it is false. It can’t be a bit true. Or sort-of-false, if you look at it in a certain way. All of those grey areas in real life go out of the window, and you are left with TRUE and FALSE.
Second, since it considers only two values – TRUE and FALSE – this lends itself to being represented by a transistor in its ON and OFF states. By convention, TRUE is represented by ON (a bit with a 1 in it) while FALSE is represented by OFF (a bit with a 0 in it). But we’re skipping ahead a little.
In regular English language, propositions are usually bound up with conditionals of some kind, and are frequently used in combination with one another.
“If you’ve cleaned your bedroom and you’ve done the washing up, then you can go out and play.”
“If virtuacorp shares reach 1.30 or they drop below 0.9, then we’ll sell.”
The conditional part of both of these sentences is the if – then wrapper around the proposition.
Between the if and then, each of the sentences above actually contains two propositions, which I’ve highlighted in green and blue to distinguish them from each other.
In each case, the two propositions are combined by an operator (highlighted in purple). In the first case this operator is the word and; in the second case it is the word or.
Our understanding of ordinary language tells us what these operators do: the first (and) means that the whole proposition is true if and only if both propositions are true. The second (or) means that the whole proposition is true if either proposition is true.
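Jumping ahead a little to the F# we’ll use later in this series, here’s a sketch of how those everyday operators map onto code (the identifier names and values are invented for illustration):

```fsharp
// Two propositions from the first sentence, as Boolean values
let cleanedBedroom = true
let doneWashingUp = false

// "you've cleaned your bedroom AND you've done the washing up"
let canGoOutAndPlay = cleanedBedroom && doneWashingUp
// canGoOutAndPlay is false: both propositions must be true

// "shares reach 1.30 OR they drop below 0.9"
let reached130 = false
let droppedBelow09 = true
let sell = reached130 || droppedBelow09
// sell is true: one true proposition is enough
```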
You’ll notice that each operator has the effect of combining the two propositions on either side of it into a single result. (We call these binary operators because they have two operands. You may remember that we’ve seen the word operand before!) We’re very familiar with this kind of operator in everyday maths.
The add operator in regular maths is a binary operator; it takes the expressions on either side of it, and combines them to form a result.
The multiplication operator is another binary operator; it also takes the expressions on either side of it, and combines them to form a result.
Spot test: Can you name two other binary operators you’re very familiar with?
Answer: There are several you could pick – the most obvious are probably subtraction and division.
If you said negation – then yes, that is an operator, but it is not a binary operator. It operates on only one value: the value that is to be negated, so it is called a unary operator.
You’ll also notice the multiplication and addition operators appear between their operands, so we often call them infix operators, whereas the negation operator appears before its operand, so we call it a prefix operator.
So, we can recognize binary and unary operators in regular maths. What about our logical operators, and and or? Can we write those natural language expressions in a more formal way, too?
Let’s call the two propositions in the first statement p (“you have cleaned your bedroom”) and q (“you have done the washing up”), and the overall proposition r.
We can then write the statement “you’ve cleaned your bedroom and you’ve done the washing up” like this:

r = p ∧ q

We’ve just added to the list of weird symbols we will one day take for granted. We don’t bat an eyelid at + for ‘plus’ or × for ‘multiplied by’ (or, indeed ‘p’ for “a curious plosive noise we make by violently forcing air through our tightly closed lips as we open our mouth”). This new one is ∧ and it means ‘and’.
This expression, then, just means that if p is true, and q is true, then r is true, otherwise r is false.
Similarly, if p is (“virtuacorp shares reach 1.30″) and q is (“virtuacorp shares drop below 0.9″), then we can write the statement “virtuaCorp shares reach 1.30 or they drop below 0.9″ as

r = p ∨ q

This means that if p is true, or q is true, then r is true, otherwise r is false, and the symbol ∨ represents ‘or’.
As well as the binary operators and and or, there is a logical unary operator called not which is a part of the family. It has the effect of making a true proposition false and a false proposition true.
If p is (“virtuacorp shares have reached 1.30″) then the statement (“virtuacorp shares have not reached 1.30″) can be represented by

¬p
Here are the two families of operators we’ve seen so far – the familiar ones from everyday maths, and their logical equivalents.
MULTIPLY (×) | ADD (+) | NEGATE (−) |
AND (∧) | OR (∨) | NOT (¬) |
One way to express what they do is to draw up truth tables for the operators. Given the truth value of each of two propositions, it shows the result of applying a given operator to them.
Here’s the truth table for the ∧ operator (AND). We write the two operands for the operator in the first two columns, and the result in the third column.
p | q | p ∧ q |
False | False | False |
False | True | False |
True | False | False |
True | True | True |
Spot test: Can you write out the truth table for the ∨ operator? Remember that the result is true if either proposition is true. Give it a go before you look at the answer below.
Answer: Here’s the truth table for the ∨ operator (OR).
p | q | p ∨ q |
False | False | False |
False | True | True |
True | False | True |
True | True | True |
Spot test: what about the truth table for the ¬ operator? Remember that it is a unary operator, so it only has one operand. Again, give it a go before you look at the answer below.
Answer: Here’s the truth table for the ¬ operator (NOT).

| p | ¬p |
|---|---|
| False | True |
| True | False |
As we mentioned earlier, computers aren’t great with complex notions like truth and falsity.
However, they are good with the numbers 0 and 1.
If we use 1 to represent true, and 0 to represent false, we can write our propositions in a way that makes them easier for a computer to understand.
So, if, for example, x = 1 and y = 0, then it follows that x ∧ y = 0, x ∨ y = 1, and ¬x = 0.
We call this Boolean algebra, named for George Boole, who was a 19th Century British mathematician.
We can write out truth tables for these Boolean operators in exactly the same way as we could for our propositions earlier.
Spot test: for which operator is this the truth table?

| x | y | result |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

Answer: ∧ (AND)
It turns out that the set of values {0, 1} and the three operators ∧, ∨ and ¬ are all we need to construct any Boolean expression we care to think of.
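If you’d like to play with these operators right away, here’s a minimal sketch in Python (a different language from the one we’ll use later in the course, chosen here purely for illustration) that defines AND, OR and NOT over the values 0 and 1, and prints the AND truth table:

```python
# The three primitive Boolean operators over the values {0, 1}.
def AND(x, y):
    return 1 if x == 1 and y == 1 else 0

def OR(x, y):
    return 1 if x == 1 or y == 1 else 0

def NOT(x):
    return 0 if x == 1 else 1

# Print the truth table for AND, matching the table above.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y))
```

Try swapping in OR or NOT and check the output against the truth tables above.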
What happens if we try to compose several binary operators into a more complex expression?
Let’s start out (as usual) by looking at this in the familiar world of everyday maths:

x + y + z

That just means add x and y, then add z to get the result. But what about this?

x × y + z

Do we multiply x by y, then add z, or multiply x by the result of adding y and z?
We sort this out by applying a convention called operator precedence. By convention, negation takes precedence over multiplication and division, which take precedence over addition and subtraction. So, we would understand the previous expression to mean:

(x × y) + z

This shows us another good (often better!) way of sorting out what we mean when we write down an expression full of operators. We use parentheses (this just means “round brackets” – and is the plural of parenthesis) to indicate which parts of the sum we should calculate as a unit. This can avoid a lot of confusion and helps the reader a lot. It is quite clear that (x × y) + z is not the same as x × (y + z).
Logical operators compose in exactly the same way. NOT takes precedence over AND, which takes precedence over OR.
So

¬x ∧ y ∨ z

is understood to mean

((¬x) ∧ y) ∨ z
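Python’s logical operators happen to follow the same precedence convention (NOT binds tightest, then AND, then OR), so we can check the rule with a quick sketch, used here purely as an illustration:

```python
x, y, z = True, False, True

# NOT binds tightest, then AND, then OR, so with no parentheses
# this parses as ((not x) and y) or z:
a = not x and y or z
b = ((not x) and y) or z

print(a, b)  # both the same value
```

Whatever truth values you pick for x, y and z, the two expressions always agree.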
The three logical operators we’ve already seen are all you need to construct a Boolean algebra. However, some special combinations of operators are so useful that we give them names.
Let’s think about the light in a stairwell for a minute. It has two switches, one at the top, and one at the bottom. Both switches are off, and so the light is off. I want to go upstairs to bed, so I flip the switch at the bottom to “on”, and the light comes on. I trudge up the stairs, trying not to spill my bedtime cocoa (or gin and tonic or whatever), and blearily flip the switch at the top to “on”. The light goes off. In the cold, winter’s morning, I awake, bright eyed and ready for a day’s work, and leap out of bed to head downstairs for a cup of tea (or gin and tonic or whatever). Being a winter’s morning, it is still dark, so I flip the switch at the top of the stairs to “off”. The light comes on. I bound downstairs two at a time, flip the switch at the bottom to “off” and the light goes off.
We will not look any further into the grim details of my day, but draw out a table describing the states of the switches and the lights throughout that story.
| Switch 1 | Switch 2 | Light |
|---|---|---|
| Off | Off | Off |
| On | Off | On |
| On | On | Off |
| Off | On | On |
| (Off) | (Off) | (Off) |
This looks an awful lot like a Boolean truth table
| x | y | z |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
| 0 | 1 | 1 |
If both operands are different, then the result is 1. If they are the same, the result is 0. We call this exclusive OR, or sometimes XOR, and it is denoted by the symbol ⊕.
However useful this may be (we’ve already seen a real, practical example of its use in the light switches) – it is not one of our primitive operators. Instead, you can construct it as a combination of the operators AND, OR and NOT.
Let’s try an exercise now.
Exercise: Write a Boolean expression using AND, OR and NOT that is equivalent to XOR.
To help you work out the answer, we can use the interactive programming environment that we installed in our setting up section to experiment with logic.
The setup instructions for this F# environment are here.
Start it up by opening a command prompt / console and typing fsi (Windows) or fsharpi (Linux/Mac).
F# understands about Boolean values and operators. It calls the values true and false, the AND operator is && and the OR operator is || (you’ll probably find this pipe character near the left shift key, or near the return key on the right of the keyboard, but your keyboard layout may vary), and the NOT operator is the word not.
Let’s give this a try. First, we could try ¬false.
Type

not false;;

and press return.
(Remember that ;; at the end of the line tells the F# interpreter that we’re done with our input and it should execute what we’ve typed.)
F# responds with the following:
val it : bool = true
You can read that as: “the resulting value (‘it’) is a bool and that value is true”. Which is exactly what we’d expect from ¬false.
Let’s try a binary operator – AND, say. What’s the result of true ∧ false? In F#, the AND operator is represented by &&, so we can type:
true && false;;
F# responds:
val it : bool = false
which we read as “the resulting value (‘it’) is a bool and that value is false”
Spot test: What about OR? What’s the result of true ∨ false? OR is represented by || in F#. So what will you type, and what will the F# runtime’s output be? As usual, work it out, try it in F#, then check the answer here.
Answer:
true || false;;
val it : bool = true
which we read as “the resulting value (‘it’) is a bool and that value is true”.
Do try this out before looking at the answer. It might take you some time, but don’t worry – work it through carefully, and you’ll get there in the end.
You’ll be doing a lot of this in real, day-to-day programming jobs wherever you are in the stack, and you want to train your brain to think in this way.
One problem with most modern programming is that we have to do an awful lot of donkey work setting up the environment we’re working in, preparing data to match the requirements of some 3rd party code we have to integrate with, or dealing with the operating system, programming language and runtime (more on those later), laying out forms or rendering the visuals our User Experience and Design team have lovingly prepared for us. So much so that for many programmers, this seems to be all of the job.
Faced with the need to get some visual to animate in to a web page, they read a book, or search for a blog, and find some code that (pretty much) does the job, with nice step-by-step instructions on how to get it into their application. They bookmark the page, and that tool or code sample becomes part of their development armoury. When they see a problem like that one, they reach for that tool. They’re often productive and they get the job done. (Not inventing everything from scratch is a really important part of programming – we’re always building on other people’s skills and experience.)
However, they often don’t really understand why that code did the job for them, or what the constraints were, or under what circumstances it might fail.
And when faced with a knotty piece of business-driven logic like the examples we’ve seen (even something as simple as a pair of light switches – and real business logic is often much more complicated than this) they don’t have the discipline, experience or tools to analyse it to a sufficient level of detail even to get the basic logic right – let alone think about the edge cases. And that’s one of the primary sources of bugs in our systems.
We all get sucked into this way of working from time to time – pressure to deliver often leads us to take supposed short-cuts, and hack our way through the problem to some kind of working solution. It is very tempting to take a working example from the web and hammer it in to our application, without taking the time to go back (at some point) and really understand what it does. But that approach usually comes back to bite us later on.
This whole course is about diving into the craft of programming and starting the long journey to really understand (at some level) what we do when we write programs. That’s why I’m encouraging you to do the exercises, and not just read the question, and then the answer, and move on.
Also, the people who really understand this stuff, and come in and analyse the mess other people have made of their logic, or advise people how not to get in a mess in the first place, get paid a heck of a lot more than the people doing the day-to-day grind, and (probably) have much more fun to boot. So there are incentives for this investment of your time and brain-power!
OK, you can get back to working on the exercise, now that you’re armed with something that lets you quickly test your efforts.
Exercise: Write a Boolean expression using AND, OR and NOT that is equivalent to XOR.
Answer:
Probably the easiest way to approach this problem is through the truth table.
Let’s remind ourselves of the truth table for XOR.

| x | y | x ⊕ y |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 0 |
Now, if we discard the rows that produce a false result:

| x | y | x ⊕ y |
|---|---|---|
| 1 | 0 | 1 |
| 0 | 1 | 1 |
Looking at the table above, we build a term for each row by ANDing together the operands. If there is a 1 in an operand’s column, we just take the operand itself. Otherwise we take its complement.
So, in this case our two terms are:

| x | y | term |
|---|---|---|
| 1 | 0 | x ∧ ¬y |
| 0 | 1 | ¬x ∧ y |

We now OR those terms together to produce the result:

x ⊕ y = (x ∧ ¬y) ∨ (¬x ∧ y)
This is sometimes called a sum of products approach (thinking about the relationship of OR with addition, and AND with multiplication).
Another way of doing it would be the product of sums approach. In this case, we only look at the rows with a false result in the truth table.

| x | y | x ⊕ y |
|---|---|---|
| 1 | 1 | 0 |
| 0 | 0 | 0 |

As the name implies, this time we build a term for each row by ORing together the operands – but now the complement rule is reversed: where there is a 0 in an operand’s column we take the operand itself, and where there is a 1 we take its complement.

| x | y | term |
|---|---|---|
| 1 | 1 | ¬x ∨ ¬y |
| 0 | 0 | x ∨ y |

Then we AND together those terms to produce the result:

x ⊕ y = (¬x ∨ ¬y) ∧ (x ∨ y)
If you didn’t know about the truth-table technique, we could also look at the expression in words: “it is true if (x or y) is true, but not if (x and y) is true”.
This leads us to yet another expression:

x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)
So many possible expressions for the same result! In a later section, we’re going to look at the rules of Boolean algebra that let us transform from one to another, and find a suitably compact form.
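As a quick sanity check (sketched here in Python rather than F#, purely for illustration), we can test that all three expressions agree with XOR for every combination of inputs:

```python
# Check that all three XOR expressions agree on every input.
# x != y is a direct way of writing XOR, used as the reference.
for x in (False, True):
    for y in (False, True):
        sum_of_products = (x and not y) or (not x and y)
        product_of_sums = (not x or not y) and (x or y)
        in_words        = (x or y) and not (x and y)
        assert sum_of_products == product_of_sums == in_words == (x != y)

print("all three expressions agree with XOR")
```

If any of the expressions were wrong, one of the assertions would fail.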
Let’s pick one and prove to ourselves that it works by trying a simple example, x = true and y = false, in F#.
The expression (x ∨ y) ∧ ¬(x ∧ y) can then be written as:
(true || false) && not (true && false);;
That produces the result:
val it : bool = true
So far, so good, but we really need to produce the whole truth table. It will be a bit boring typing that whole expression out every time, so in the next section, we’ll learn how to use F# to ease the pain for us.
Learning To Program – A Beginners Guide – Part One – Introduction
Learning To Program – A Beginners Guide – Part Two – Setting Up
Learning To Program – A Beginners Guide – Part Three – What is a computer?
Learning To Program – A Beginners Guide – Part Four – A simple model of a computer
Learning To Program – A Beginners Guide – Part Five – Running a program
Learning To Program – A Beginners Guide – Part Six – A First Look at Algorithms
Learning To Program – A Beginners Guide – Part Seven – Representing Numbers
Learning To Program – A Beginners Guide – Part Eight – Working With Logic
Learning To Program – A Beginners Guide – Part Nine – Introducing Functions
Learning To Program – A Beginners Guide – Part Ten – Getting Started With Operators in F#
Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions
Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks – Recursion in F#
So far, we’ve assumed we know what “store 1 in the memory location at offset 4” means. If we’ve told it to store 1 at offset 4, when we read back the value at offset 4, we get 1 as expected.
We know, then, that a memory location can hold a whole number greater than or equal to zero (the non-negative integers). Is there a maximum number that we can hold in that location?
Let’s write a short program to find out. This uses a loop like the one we saw in the previous exercise to increment r0 all the way up to 255. When r0 reaches 255, we end the loop and proceed to add another 1 to the result, and write it back to the memory at offset 0.
(Don’t forget that you can get the tool we use to run these “assembly language” programs from here. The setup instructions for this F# environment are here.)
load r0 0 ; set the result to 0
add r0 1 ; add 1 to the result (<--- jump back here)
compare r0 255 ; is the result 255?
jumpne -2 ; if not, jump back up 2
add r0 1 ; add another 1 to the result (255 + 1=??)
write 0 r0 ; write the result to the memory at offset 0
exit
If you run this, you'll see R0 increase one-by-one, until it reaches 255. Then the final few lines of the output are as follows:
compare r0 255
R0=255 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=3, FL=0
0 0 0 0 0 0 0 0 0 0
jumpne -2
R0=255 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=4, FL=0
0 0 0 0 0 0 0 0 0 0
add r0 1
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=5, FL=0
0 0 0 0 0 0 0 0 0 0
write 0 r0
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=6, FL=0
0 0 0 0 0 0 0 0 0 0
exit
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=65535, FL=0
0 0 0 0 0 0 0 0 0 0
You can see that when we reached the value 255 in R0, the comparison set the FL register to 0, so the JUMPNE instruction did not decrement the program counter, and we went on to execute the ADD instruction.
When we added 1 again, R0 seemed to reset back to 0!
What's going on?
Well, it's all to do with the fact that each memory location in our computer is a fixed size called a byte (remember that the x86 MOV to memory instruction referred to a BYTE in the previous section). What's a byte? And what's the maximum number we can represent with a byte? To understand, we need to look at binary arithmetic.
Remember in the "How does a computer work" section we talked about transistors being switches that can be On and Off. It turns out that we can represent integers just using On and Off values.
We're all familiar with decimal arithmetic - counting 0…1…2…3…4…5…6…7…8…9… When we run out of the symbols we use for numbers, we then add a second digit 10…11…12…13…14…15…16…17…18…19, then 20…21…22…23…24…25…26…27…28…29 and so on, until we run out of symbols again. Then we add a third digit 100…101…102…103…104… and so on. We start out by counting up digit1 (0-9). Then, when we add a second digit, the number is (ten x digit2) + digit1. When we add a third digit, the number is (ten x ten x digit3) + (ten x digit2) + digit1. And so on.
Here are a couple of examples (like you need examples of the number system you use every day!). But notice specifically that each digit represents an additional power of 10. So we call this "base 10" (or decimal). (Remember that anything to the power 0 is 1.)
| Digit Multiplier | 100 | 10 | 1 |
|---|---|---|---|
| (As power) | 10^{2} | 10^{1} | 10^{0} |
| Base 10: 234 | 2 | 3 | 4 |
| Base 10: 84 | 0 | 8 | 4 |
This is so familiar to us, we've usually forgotten that we were once taught how to interpret numbers like this. (Often, this was done with cubes of wood or plastic in blocks of 1, 10, 100 etc. when we were very tiny.)
Imagine, instead, that we only had symbols up to 7.
We'd count 0…1…2…3…4…5…6…7… Then, when we ran out of symbols, we would have to add a digit and start counting again 10…11…12…13…14…15…16…17… and then 20…21…22…23…24…25…26…27… and so on until 100…101…102…103…104…105…106…107…
As with decimal, we start out by counting up digit1 (0-7). Then, when we add a second digit, the number is (eight x digit2) + digit1. When we add a third digit, the number is (eight x eight x digit3) + (eight x digit2) + digit1. And so on.
We call this counting system "base 8" (or octal). As you might expect, each digit represents an increasing power of 8.
Here are some examples of values in base 8
| Digit Multiplier | 64 | 8 | 1 |
|---|---|---|---|
| (As power) | 8^{2} | 8^{1} | 8^{0} |
| Base 10: 256 / Base 8: 400 | 4 | 0 | 0 |
| Base 10: 84 / Base 8: 124 | 1 | 2 | 4 |
What if we had more symbols for numbers than the 10 (0-9) that we're familiar with? What if we had sixteen, say? We call this "base 16" or hexadecimal (hex for short). Rather than resort to Wingdings or Smiley Faces for the extra symbols, it is conventional to use the letters A-F as the 'numbers' greater than 9.
| Digit Multiplier | 256 | 16 | 1 |
|---|---|---|---|
| (As power) | 16^{2} | 16^{1} | 16^{0} |
| Base 10: 256 / Base 16: 100 | 1 | 0 | 0 |
| Base 10: 84 / Base 16: 54 | 0 | 5 | 4 |
| Base 10: 255 / Base 16: FF | 0 | F | F |
| Base 10: 12 / Base 16: C | 0 | 0 | C |
Now, imagine we're really symbol-poor. We've only got 0 and 1 (the "off" and "on" of the transistor-switches in our processor). It works just the same. First, we count units: 0…1 We've run out of symbols, so we add a digit and count on 10…11. Out of symbols again, so add another digit: 100…101…110…111. We run out of symbols really quickly, so we need another digit: 1000…1001…1010…1011…1100…1101…1110…1111 and so on.
Here are some examples in "base 2" (or binary).
| Digit multiplier | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
|---|---|---|---|---|---|---|---|---|
| (As power of 2) | 2^{7} | 2^{6} | 2^{5} | 2^{4} | 2^{3} | 2^{2} | 2^{1} | 2^{0} |
| Base 10: 255 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Base 10: 84 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
Look at the first example in that table. The decimal (base 10) number 255 is represented by a full set of 1s in each of 8 binary digits.
(Saying "binary digit" is a bit long winded, so we've shortened it in the jargon to the word bit.)
We call a number made up of 8 bits a byte.
We number the bits from the right-most (0) to the left-most (7). Because the right-most represents the smallest power of 2, we call it the least significant bit (LSB). The left-most bit represents the largest power of 2, so we call it the most significant bit (MSB).
| Bit number | 7 (MSB) | 6 | 5 | 4 | 3 | 2 | 1 | 0 (LSB) |
|---|---|---|---|---|---|---|---|---|
| Power of 2 | 2^{7} | 2^{6} | 2^{5} | 2^{4} | 2^{3} | 2^{2} | 2^{1} | 2^{0} |
As we've seen, the maximum (decimal) number we can store in a byte is 255 (a '1' in all 8 bits).
| Bit number | 7 (MSB) | 6 | 5 | 4 | 3 | 2 | 1 | 0 (LSB) |
|---|---|---|---|---|---|---|---|---|
| Power of 2 | 2^{7} | 2^{6} | 2^{5} | 2^{4} | 2^{3} | 2^{2} | 2^{1} | 2^{0} |
| Base 10: 255 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
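If you want to experiment with bits and bytes yourself, here's a small sketch in Python (used purely for illustration; its built-in format and int functions do the base conversion for us):

```python
# Decimal 84 as 8 binary digits, matching the table above.
print(format(84, '08b'))   # 01010100

# And back again: reading a string of bits as a number.
print(int('01010100', 2))  # 84

# Testing an individual bit: is bit 2 of 84 set?
# (shift right by the bit number, then mask off all but bit 0)
print((84 >> 2) & 1)       # 1
```

Try a few other values and check them against the tables above.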
The fact that something screwy happens when we exceed this 8-bit maximum value hints that our register R0 is probably 1 byte in size.
But why does something screwy happen?
To understand that, we need to learn about binary addition.
We're so used to doing decimal addition, that we probably don't even think about it. But let's remind ourselves how we add up regular decimal numbers by hand.
We write the numbers to be added in a stack, with each equivalent digit lined up in columns. Then, starting from the right hand column (which, remember, we call the least significant) we add up the total of that column and record it at the bottom. If the total is greater than 9 (the biggest number we can represent in a single column), we "carry" that number into the next column, and include it in the addition for that column.
If a particular number has no value in the column, we treat it as a 0.
Here are a couple of examples
054
+ 163
-----
217
carry 1
0099
+0999
----
1098
carry 111
We can do exactly the same thing for binary addition. Here's an example of two bytes being added together:
00000101
+00001011
--------
00010000
carry 1111
Now, let's see what happens when we add 1 to a byte containing the decimal value 255:
11111111
+00000001
--------
100000000
carry 11111111
But, hang on a minute - that's 9 bits! And a byte can contain only 8 bits. We can't just make up a new '9th' bit.
What happens is that we're left with the least significant 8 bits (i.e. the right-most 8 bits) - and they are binary 00000000 - or 0 in decimal!
And that's why our program wrapped round to 0 when we added 1 to 255.
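We can sketch this wrap-around behaviour in a few lines of Python (purely for illustration): masking with binary 11111111 keeps only the least significant 8 bits, just as our 1-byte register does.

```python
# Simulate an 8-bit register: after an add, keep only the low 8 bits.
MASK = 0xFF  # binary 11111111, the largest value a byte can hold

r0 = 255
r0 = (r0 + 1) & MASK  # the carried-out ninth bit of 256 is discarded
print(r0)  # 0
```

The intermediate sum really is 256 (binary 100000000); the mask is what throws away the ninth bit.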
It's all very well knowing this limitation, but how can we represent larger numbers?
One way is to increase the number of bits in your storage.
It turns out that our computer has some larger registers called WR0, WR1, and WR2 that can do just that. Each of these registers can store values up to 16 bits (or two bytes) in size. We sometimes call a 16 bit value a short (and on computers with a 16-bit heritage, a word).
Spot test: What's the largest number you can represent with 16 bits?
Hint: 2^{8} = 256
Answer: If the largest decimal you can store in 8 bits is 255 (= 2^{8} - 1), then the largest you can store in 16 bits is (2^{16} - 1) = 65535
With n bits, you can store a positive decimal integer up to (2^{n} - 1).
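That rule is easy to sketch in code (a Python illustration):

```python
# The largest unsigned integer representable in n bits is 2**n - 1.
def max_unsigned(n):
    return 2**n - 1

print(max_unsigned(8))   # 255
print(max_unsigned(16))  # 65535
print(max_unsigned(32))  # 4294967295
```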
Let's load up a number larger than 255 into one of these 16 bit registers. We'll create a new program to do that.
LOAD WR0 16384
EXIT
If you run that, you should see the following output. 16384 is loaded into the WR0 register.
load wr0 16384
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=1, FL=0
0 0 0 0 0 0 0 0 0 0
exit
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=65535, FL=0
0 0 0 0 0 0 0 0 0 0
But what happens when we write the contents of that register to memory?
LOAD WR0 16384
WRITE 0 WR0
EXIT
This time the output looks like this:
load wr0 16384
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=1, FL=0
0 0 0 0 0 0 0 0 0 0
write 0 wr0
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=2, FL=0
0 64 0 0 0 0 0 0 0 0
exit
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=65535, FL=0
0 64 0 0 0 0 0 0 0 0
It didn't write 16384 to the memory location at offset 0, it wrote 64 to the memory location at offset 1?!
Try it with a different value: (16384 + 1) = 16385
LOAD WR0 16385
WRITE 0 WR0
EXIT
This is the output:
load wr0 16385
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=1, FL=0
0 0 0 0 0 0 0 0 0 0
write 0 wr0
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=2, FL=0
1 64 0 0 0 0 0 0 0 0
exit
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=65535, FL=0
1 64 0 0 0 0 0 0 0 0
This time, it has stored 1 in the memory at offset 0, and 64 at offset 1.
What will the result be if we store 16386? Have a guess before you look at the answer.
LOAD WR0 16386
WRITE 0 WR0
EXIT
Here’s the answer:
R0=0 R1=0 R2=0, WR0=16386, WR1=0, WR2=0, PC=65535, FL=0
2 64 0 0 0 0 0 0 0 0
Did you guess right? OK - how about 16640? Again, have a guess before you look at the answer. (There's a hint just below if you need it.)
Hint: 16640 = 16384 + 256
LOAD WR0 16640
WRITE 0 WR0
EXIT
And here's the answer again:
R0=0 R1=0 R2=0, WR0=16640, WR1=0, WR2=0, PC=65535, FL=0
0 65 0 0 0 0 0 0 0 0
If you haven't spotted the pattern yet, this should give you an even bigger clue - what happens if we store the number 256?
LOAD WR0 256
WRITE 0 WR0
EXIT
R0=0 R1=0 R2=0, WR0=256, WR1=0, WR2=0, PC=65535, FL=0
0 1 0 0 0 0 0 0 0 0
It is rather like these two memory locations are storing the number in base 256!
We've seen that we can hold the numbers 0-255 in 1 byte, so when we need to store a larger number, we add a second byte, and count in the usual manner (256 * second byte) + (first byte). You could imagine storing a 24 bit number by adding a third byte, and a 32 bit number by adding a fourth byte and so on. (We sometimes call a 32-bit number a dword, from 'double word' and a 64-bit number a qword, from 'quadruple word'.)
| Byte | 1 | 0 |
|---|---|---|
| (As power of 256) | 256^{1} | 256^{0} |
| Base 10: 256 | 1 | 0 |
| Base 10: 16384 | 64 | 0 |
As with our Most Significant Bit and Least Significant Bit, we call the byte that stores the largest power of 256 the Most Significant Byte, and the other the Least Significant Byte - or sometimes High Byte and Low Byte.
When we were ordering the bits in our byte, we numbered them from right-to-left, low-to-high. You'll notice that when we store the 16-bit number in our two 8-bit memory locations, we're storing the high byte in the memory location at offset 1, and the low byte in the memory location at offset 0.
|  | High (most-significant) byte | Low (least-significant) byte |
|---|---|---|
| Offset in memory | 1 | 0 |
| (As power of 256) | 256^{1} | 256^{0} |
We could equally well have chosen to do that the other way around!
Computers that choose this particular byte ordering are called little endian memory architectures. Our computer does it this way, as does Intel's x86 series. The PowerPC, by contrast, is big endian - it stores the high byte in the lower memory offset, and the low byte in the higher memory offset. Some architectures (like the ARM) let the programmer choose which to use.
(Another quick jargon update: when we send data over a network, it is often encoded in a big-endian way, so you sometimes see big-endian ordering called network ordering.)
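Here's a little Python sketch (again, purely for illustration) of the little-endian split we just observed, where 16384 becomes the bytes 0 and 64:

```python
# Split 16384 into two bytes, little-endian (low byte first),
# matching the memory layout our model computer produced: "0 64".
value = 16384
low = value & 0xFF          # value mod 256
high = (value >> 8) & 0xFF  # value div 256
print(low, high)            # 0 64

# Python's built-in int.to_bytes does the same split for us.
print(list(value.to_bytes(2, 'little')))  # [0, 64]
print(list(value.to_bytes(2, 'big')))     # [64, 0]
```

Note how the 'big' ordering is just the same two bytes the other way around.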
This is an example of encoding a value into computer memory, in this case, a positive integer. Notice that even with something as simple as a positive integer, we have to think about how it is represented in the computer! We'll see this need for encoding again and again for decimals, text, images and all sorts of data. We need to encode whenever we have to represent some information in a computer. Most of the time, higher-level languages or other people's code will take care of the details for us, but if we forget about it entirely, we open ourselves up to all sorts of seemingly-mysterious behaviour and bugs.
Sometimes, displaying numbers in decimal is not the most convenient way to read them - particularly when we are looking at multi-byte values.
Here's the decimal number 65534 represented as two decimal bytes

| Low byte | High byte |
|---|---|
| 254 | 255 |
Now, look what happens if we represent it in hexadecimal (base 16) instead of decimal.
Here's 65534 in hex:
FFFE
And here it is represented as two hexadecimal bytes in memory
| Low byte | High byte |
|---|---|
| FE | FF |
Notice that they are represented using exactly the same digits (barring the byte-ordering). Whereas the decimal values "255 254" look nothing like "65534".
It can be very convenient to get used to reading numbers in base 16 (hex), because each byte is always just a pair of hex digits, and their representation in memory is very similar to the way you would write them on the page.
We've got a handy switch in our program that lets us start to represent the output in hex, instead of decimal.
Just below your program, you'll see the following lines:
(* THIS IS THE START OF THE CODE THAT 'RUNS' THE COMPUTER *)
let outputInHex = false
Change this second line to read:
let outputInHex = true
Now, let's update our program to load 65534 in decimal into memory:
LOAD WR0 65534
WRITE 0 WR0
EXIT
And run the program again. The output is now displayed in hex.
load wr0 65534
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
00 00 00 00 00 00 00 00 00 00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
exit
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
What if we want to write the values in our program in hex? How do we distinguish between the decimal value 99, and the hex value 99 (=153 decimal)? Different languages use different syntax, but our model computer supports two of the most common - both of which use a prefix. You either add the prefix 0x, or use the prefix #.
(You might have seen that # prefix used when specifying colours in HTML mark-up)
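As it happens, Python uses the same 0x and 0b prefixes in its own number literals, so we can sketch the idea directly (purely as an illustration):

```python
# The 0x prefix marks a hexadecimal literal, 0b a binary one.
print(0xFFFE)                        # 65534
print(0b1111111111111110)            # 65534
print(0xFFFE == 0b1111111111111110)  # True
```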
LOAD WR0 #FFFE
WRITE 0 WR0
EXIT
Or
LOAD WR0 0xFFFE
WRITE 0 WR0
EXIT
Try updating the program to use one of these hex representations and run it again.
load wr0 0xFFFE
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
00 00 00 00 00 00 00 00 00 00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
exit
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
If you want to, you can also use a binary notation to specify a number. For that we use the prefix 0b
(This is a less common notation, and is not supported across all languages, but it is quite useful.)
LOAD WR0 0b1111111111111110
WRITE 0 WR0
EXIT
Here's the result of running that version of the program.
load wr0 0b1111111111111110
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
00 00 00 00 00 00 00 00 00 00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
exit
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
So far, (almost) all of the numbers that we've represented have been the positive integers.
How do we represent negative numbers?
This is much the same question as "how do we do subtraction". Why? You'll probably recall from basic maths that subtracting the number B from the number A is equivalent to adding the negative of B to A. You might have seen that written down in this form:
(A - B) = A + (-B)
So, perhaps we can use the idea of subtraction to help us to represent a negative number?
We've already seen a sort of example of subtraction: what happened when we saw the addition of 8-bit numbers carry beyond an 8-bit value? Let's remind ourselves.
For 8-bit values, if we try to add two numbers such that the result would be greater than 255, we are left with the least significant 8-bits, and we drop the carried-over 9th bit.
Here are a few examples
255 + 1 = 0 (with carry)
255 + 2 = 1 (with carry)
255 + 3 = 2 (with carry)
(You can write a short program to test that, if you like.)
load r0 255
load r1 255
load r2 255
add r0 1
add r1 2
add r2 3
exit
What we know about basic arithmetic tells us that:
255 - 255 = 0
255 - 254 = 1
255 - 253 = 2
If we replace the right-hand-side of the first expression with the left-hand-side of the second expression, we get
255 + 1 = 255 - 255
255 + 2 = 255 - 254
255 + 3 = 255 - 253
This would seem to imply that
1 = (- 255)
2 = (-254)
3 = (-253)
How is this possible?!
Well, we've already answered that question: it is precisely because there is a maximum number we can represent in a fixed number of bits, and the way that larger numbers carry over.
Let's represent those numbers in binary:
00000001 = -(11111111)
00000010 = -(11111110)
00000011 = -(11111101)
A quick rearrangement of the expression shows us that this is really true!
(remember that the expression a = -b is equivalent to a + b = 0)
00000001 + 11111111 = 00000000 (with carry)
00000010 + 11111110 = 00000000 (with carry)
00000011 + 11111101 = 00000000 (with carry)
In fact, we could write a little program to test that:
load r0 0b00000001
load r1 0b00000010
load r2 0b00000011
add r0 0b11111111
add r1 0b11111110
add r2 0b11111101
exit
So, for every positive number that we can represent in binary, there is a complementary negative number.
We call this representation of a negative number the two's complement representation.
It is fairly easy to calculate - you just take the binary representation of the absolute number (the number without its sign) and flip all of the 0s to 1s, and 1s to 0s (we call this the one's complement of the number), then add 1, to make the two's complement.
Let's have an example of that: what's the two's complement representation of the number -27, in an 8-bit store?
First, we need to represent 27 in binary:
00011011
Then we flip all the bits to get the one's complement representation:
11100100
Finally, we add 1 to get the two's complement representation:
11100101
So, -27 in two's complement binary notation is 11100101
We can test this
In decimal, 60 - 27 = 33
60 in binary is 00111100
-27 in two's complement binary notation is 11100101
So, 60 - 27 in binary is 00111100 + 11100101
That is 00100001 (with a carry), which is 33 in decimal, as required!
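We can sketch that whole calculation in Python (purely for illustration): flip the bits of 27, add 1, and then check the 60 - 27 sum:

```python
# Two's complement of 27 in 8 bits: flip the bits, add 1, keep 8 bits.
ones_complement = (~27) & 0xFF                 # flip all 8 bits
twos_complement = (ones_complement + 1) & 0xFF
print(format(twos_complement, '08b'))          # 11100101

# Check: 60 + (-27) should give 33, once the carry bit is dropped.
print((60 + twos_complement) & 0xFF)           # 33
```

The & 0xFF mask plays the part of the byte-sized store, discarding the carried-out ninth bit.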
Originally, we stressed that we could store the non-negative integers 0 to 255 in an 8-bit value.
Now, we have added the possibility of storing both positive and negative integers in an 8-bit value, by the use of a two's complement representation.
So, every number has an equivalent two's complement "negative" number. How do we tell the difference between 11100101 when it is meant to be -27 and 11100101 when it is meant to be 229?
The short answer is that you can't - it is up to the programmer to determine when a result might be negative and when it might be positive.
We could specify (by documentation, for example) that a particular value we were storing was unsigned (i.e. a positive integer) or signed (i.e. any integer, positive or negative).
It turns out we can remove the ambiguity in the case of the signed integer by limiting the maximum and minimum values we can represent.
Think about the decimal numbers 1…127.
These can be represented in binary as 0b00000001…0b01111111.
Now, think about the decimal numbers -127…-1.
These can be represented in binary as 0b10000001…0b11111111.
Notice how the positive numbers all have a 0 in the top bit, and the negative numbers all have a 1 in the top bit. We often call this the sign bit.
If we choose to limit the numbers we represent in a single byte to numbers that can be represented in 7 bits, rather than 8, then we have room for this sign bit, and there is no ambiguity about what we mean. But how can we tell whether some memory is signed or not? The short answer is that we can't. The only way to tell is to document how the storage is being used. We're reaching the limits of what we can conveniently express in such a low-level language. We need to move to a higher-level of abstraction, and a richer language, to help us with that.
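In Python we can make that documentation explicit with two small helpers — a sketch of the two readings of the same bits (the helper names are my own):

```python
def as_unsigned(byte):
    """Read 8 bits as a positive integer, 0..255."""
    return byte & 0xFF

def as_signed(byte):
    """Treat the top bit as a sign bit: values >= 128 are two's complement negatives."""
    byte &= 0xFF
    return byte - 256 if byte >= 128 else byte

bits = 0b11100101
print(as_unsigned(bits))  # 229
print(as_signed(bits))    # -27
```

Same bits, two meanings — only the documentation (here, the function name) tells them apart.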
There's a little wrinkle that happens when we copy a value from a smaller representation to a larger one. Here's decimal 1 as a byte: 0b00000001. And here's decimal 1 as a 16-bit value: 0b0000000000000001. So far so good.
What about -127? That's 0b10000001 in a byte, but it's 0b1111111110000001 when stored in a word.
Notice that in each case, the sign bit gets extended across the whole of the most-significant byte when you copy from a byte to a word (the zero for the case of a positive number, or the 1 for a negative number.)
This sign extension when you copy between storage sizes is very important: 0b10000001 is -127 decimal when stored in a byte, but naively copy that into a word and it becomes 0b0000000010000001, which is 129 decimal!
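Sign extension is easy to sketch in Python (the helper below is my own, not one of our toy computer's instructions):

```python
def sign_extend(byte, to_bits=16):
    """Copy an 8-bit value into a wider store, replicating the sign bit."""
    byte &= 0xFF
    if byte & 0x80:  # sign bit set: fill the new high bits with 1s
        byte |= ((1 << to_bits) - 1) ^ 0xFF
    return byte

print(format(sign_extend(0b00000001), "016b"))  # 0000000000000001
print(format(sign_extend(0b10000001), "016b"))  # 1111111110000001
```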
There's an obvious danger with copying numbers from a larger representation (e.g. a word) to a smaller one (e.g. a byte) and that's called truncation.
There's no problem if the number can be fully represented in the number of bits of the target representation e.g. 1 is 0x0001 in 16 bits, and 0x01 in 8 bits - you just lose the high byte. Similarly, -1 is 0xFFFF in 16 bits, and 0xFF in 8 bits - you just lose the high byte again, and it remains correct. But what about 258? That's 0x0102 in 16-bits, but if you lose the high byte, it becomes 0x02 - not the number you were thinking of at all!
Most higher level languages will warn you about possible truncation (or even prevent you from doing it directly). But it is a very common source of bugs in programs.
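Truncation can be imitated in Python with a mask (again a sketch; Python's own integers never truncate, so the helper is my own):

```python
def truncate_to_byte(word):
    """Keep only the low 8 bits, as copying a word into a byte would."""
    return word & 0xFF

print(hex(truncate_to_byte(0x0001)))  # 0x1 - still correct
print(hex(truncate_to_byte(0xFFFF)))  # 0xff - still -1 when read as signed
print(truncate_to_byte(0x0102))       # 2 - 258 silently became 2!
```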
In this section, we've looked at how we can represent numbers (specifically the integers) in data memory.
We've learned how we can convert the number between different bases - in particular base 10 (decimal), base 16 (hexadecimal) and base 2 (binary).
We then looked at binary arithmetic, and saw what happens when we add binary numbers together.
We investigated the largest value we can store in one of our registers, and determined that it represented an 8-bit value (which we call a byte).
We then looked at what happens when we try to store a number larger than that which a particular register can hold, and introduced the concept of a 16-bit number.
Next, we looked at how a 16-bit (or larger) number can be represented in memory, including the concepts of big-endian and little-endian memory layouts. We also switched our computer over to displaying values using hex instead of decimal, and looked at why we might want to do that.
Finally, we looked at how to store negative numbers in a consistent way, such that we could perform basic addition and subtraction, by using the two's complement representation, and how to constrain the range of numbers we can store, to remove the ambiguity when storing positive and negative numbers.
Next time, we'll look at how we can bring logic to bear on our problems.
Learning To Program – A Beginners Guide – Part One - Introduction
Learning To Program – A Beginners Guide – Part Two - Setting Up
Learning To Program – A Beginners Guide – Part Three - What is a computer?
Learning To Program – A Beginners Guide – Part Four - A simple model of a computer
Learning To Program – A Beginners Guide – Part Five - Running a program
Learning To Program – A Beginners Guide – Part Six - A First Look at Algorithms
Learning To Program – A Beginners Guide – Part Seven - Representing Numbers
Learning To Program – A Beginners Guide – Part Eight - Working With Logic
Learning To Program – A Beginners Guide – Part Nine - Introducing Functions
Learning To Program – A Beginners Guide – Part Ten - Getting Started With Operators in F#
Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions
Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks - Recursion in F#
Supporting IE7 with any modern web framework is a bit of a pain.
First, we made sure that we followed all of the Angular guidelines for using IE7.
We made sure we included HTML5 shim and JSON3, plus Respond for responsiveness, and *almost* everything was rendering correctly, except for a horizontal row of elements produced by an ngRepeat element, which were too wide for the space, and wrapped underneath one another.
Looking closely, we observed that the first element had some unnecessary left margin. This stems from an IE7 bug, long since fixed in IE8 and above, relating to the :first-child pseudo-class.
When Angular injects the relevant elements into the DOM to fulfil your ngRepeat request, it also injects a comment, like this:

<!-- ngRepeat: page in pages -->
On all other browsers, this comment is ignored when it comes to the pseudo-class :first-child. On IE7 it is not, so your first actual element does not match the selector, and the CSS is not applied.
Sadly, Bootstrap uses this selector to set the left margin on that element to 0, to ensure the layout fits correctly.
Fortunately, Angular comes to the rescue in the form of $index. This gives us the zero-based index of the current iteration of the repeat. We can use this to add custom classes for our first and last elements.
<div ng-repeat="page in pages" ng-class="getPageClass($index)">
  <!-- Do our stuff... -->
</div>
In our controller, getPageClass($index) builds the class list, adding our custom first and last classes when appropriate.
And we can then target it with some custom styles.
A. Write a program to add up the integers from 1 to 10, and put the result into the memory location at offset zero.
B. Alter the program to add up the integers from 1 to 15, and put the result into the memory location at offset zero.
C. Alter the program again, to add up the numbers from 5 to 15, and put the result into the memory location at offset zero.
If you wrote a program for problem (A) which ended up with the number 55 in memory location zero, then congratulations – you have not only written your first program solo, but you’ve also devised your first algorithm: a description of a process required to take some input and produce a desired result.
In this section, we’ll look at some programs I’ve devised to solve these 3 related problems. Your program may not look exactly like any of mine, and if you got the right number in the right place, then that’s great. There’s no 100% right answer – you’re always making compromises and improvements. And even if you did get the right answer, it is always interesting to look at other people’s solutions, and see if there’s something you can learn from them.
We’re going to evolve our solution from a very simple-minded approach, to a more sophisticated algorithm, and look at the implications and compromises we’re making along the way.
So, here’s a first stab at an algorithm for solving problem (A). It is not a very complicated one: we could do the addition one line at a time, long-hand.
load r1 0
add r1 1
add r1 2
add r1 3
add r1 4
add r1 5
add r1 6
add r1 7
add r1 8
add r1 9
add r1 10
write 0 r1
exit
(Notice that we remember to set the accumulated total to zero before we start – we don’t know what might have been in the register before we began.)
That’s a pretty decent stab at it. It gives us the right answer, and only takes 12 instructions to do it.
When we go to problem (B), it starts to look a bit less great. I got a bit bored typing this in, even with the help of copy and paste.
load r1 0
add r1 1
add r1 2
add r1 3
add r1 4
add r1 5
add r1 6
add r1 7
add r1 8
add r1 9
add r1 10
add r1 11
add r1 12
add r1 13
add r1 14
add r1 15
write 0 r1
exit
Imagine we wanted to add the whole sequence for 1…10,000! That’d be a lot of instructions. 10,003 to be precise! In fact, I don’t think we’ve got enough program instruction memory in this toy computer to do that. Also, don’t forget that every instruction we execute takes a little bit of time. If we’re running in the public cloud that directly costs us a little bit of money (or indirectly via our electricity bill if we’re running at home or in the office, in the form of the little bit of energy we expended in the execution of each instruction). So, ideally we’d also like to execute as few as possible.
It turns out that these are often competing constraints.
Let’s look at the first part of this challenge. First – can we make our program more compact, such that we could reasonably sum integers up to 10,000 or more? To do that, we need to understand both the problem, and the algorithm we are using to solve it. Let’s start by describing it more precisely.
There are lots of ways of describing an algorithm – we can do it in code, in everyday language, mathematically, or graphically, for example.
Instead of using more words, let’s look at a graphical representation of our first algorithm, called a Flow Chart.
Long, isn’t it? But quite expressive.
We’re using four symbols in this diagram:

- The start or end of the algorithm
- A process or operation to be executed
- Some data
- The flow of control through the algorithm (hence ‘flow chart’)
It gives a nice, clear, step-by-step view of what we are doing. Notice that many of these steps are the same. If we were trying to explain this to someone in ordinary language, we might say
To start, set the result to zero.
Then, add 1 to the result.
Then, add 2 to the result.
(… and keep doing that sort of thing until …)
Then, add 9 to the result.
Then, add 10 to the result.
Then, you are done.
Anybody with some really basic maths would understand what you meant. But, unfortunately, computers are (in general) a bit slow on the uptake, and need everything explained to them in precise detail. How could we leave out those steps where we’ve used a hand-wavy description to avoid repeating ourselves, and yet still describe precisely what needs to be done?
One possible way is to use a loop.
Let’s re-describe that process in ordinary language, and find some way of explaining the boring repetitive bit, such that we don’t just wave our hands and skip over it. Here’s my attempt.
To start, set the result to zero.
Then set the current number to 1
Then add the current number to the result
Then add 1 to the current number to give the next number
If the current number is less than or equal to 10 then go back and repeat the previous two steps, otherwise you are done.
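Those ordinary-language steps map almost one-for-one onto a loop in Python — a sketch of the algorithm itself, not of our model computer’s instructions:

```python
def sum_sequence(first, last):
    result = 0              # To start, set the result to zero
    current = first         # Then set the current number to the first in the sequence
    while current <= last:  # While there are still numbers left to add...
        result += current   # ...add the current number to the result
        current += 1        # ...then add 1 to the current number
    return result

print(sum_sequence(1, 10))  # 55
```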
If we want to be really precise, we could write this down again in language somewhere between an actual program and ordinary language. We call it pseudocode. Given the low-level language you’ve been dealing with up to now, this should be fairly easy for you to interpret – and that’s the idea. It isn’t any particular programming language, but it is just as precise as any programming language, so more or less any programmer should be able to understand it. We’re also being meticulous about describing what the algorithm is supposed to do, what input it needs, and how to interpret the result.
Description: Calculate the sum of a particular sequence of consecutive positive integers
Input: The first number of the sequence, a1, and the last number of the sequence, an
Output: The sum of the integers between a1 and an (inclusive)
result ← 0
current ← a1
while current ≤ an
    result ← result + current
    current ← current + 1

Notice we’re using a mixture of ‘ordinary language’ and mathematical symbols to describe precisely what our algorithm does.
There are a couple of terms that you might need help with:
First, you should read result ← 0 as “result changes to 0”. The arrow is just a shorthand for a phrase like “changes to” or “becomes”.
Then there’s the loop itself:

while current ≤ an
    result ← result + current
    current ← current + 1

You can read this as “while the current value is less than or equal to the last number in the sequence, keep doing the steps in the list below”. Notice that we’ve used indentation to make it easy to see where that list of steps starts and ends. Those steps can be read as “the result changes to the value of result plus the current value”, and “the current value changes to the current value plus 1”.
This pattern, where you go back and repeat a previous step based on some condition is very common in imperative programming, and we call it a loop. (This specific example is called a while loop for obvious reasons.) You can see why we call it a loop if we express this program as a flow chart.
The first thing to notice is that we’ve added an extra symbol to our set, the diamond for a decision:
- The start or end of the algorithm
- A process or operation to be executed
- Some data
- The flow of control through the algorithm (hence ‘flow chart’)
- A decision
The diamond has two arrows coming out of it: one labelled ‘yes’ for the path to take if the statement in it is true, and the other labelled ‘no’ for the path to take if the statement is false. Notice how the ‘yes’ arrow ‘loops back’ up the diagram from the decision point, to repeat a part of the process.
Let’s restructure the program to implement this new, looping version of the algorithm. We’re going to make use of registers to remember the minimum and maximum values of our sum, and take advantage of our COMPARE and JUMPLTE instructions to minimize the amount of repetitive code we have to write.
This program uses r0 to store the current number to be added, r2 to store the maximum number to be added, and r1 to accumulate the result. I’ve added some detailed comments to make it easier to follow, and to work out the offset when we jump. (You should leave the comments out if you edit your program and run it in our model computer.)
load r2 10 ; We want to add up the numbers up to 10
load r1 0 ; Set the accumulated total to zero
load r0 1 ; We want to start with 1
add r1 r0 ; add the current number in r0 to the total in r1
add r0 1 ; increment the current number by 1
compare r0 r2 ; compare the current number with the maximum number
jumplte -3 ; if the current number is less than or equal to the maximum number, jump up 3
write 0 r1 ; otherwise, write the accumulated total from r1 into memory location at offset 0
exit
The critical bit of code in this program that implements the loop is this:
add r1 r0
add r0 1
compare r0 r2
jumplte -3
write 0 r1
Here’s how it works.
Once we’ve added the current number stored in r0 to our accumulated total in r1, we increment (“add 1 to”) r0 so that it contains the next number to add. r2 still contains the largest number we are interested in.
So, if the current number (r0) is less-than-or-equal-to the maximum number (r2), then we go back up 3 instructions. This takes us back to the line that adds the current number to the total. We keep going round and round like this, each time jumping back to the add instruction.
What happens when the current number is greater than the maximum number? We’ve seen before that in that case the JUMPLTE instruction does nothing, so we will step on and write the result into memory and exit.
Let’s redraw the flowchart, with our actual instructions in the boxes so you can see how precisely similar they are.
(We’ve added a hexagonal symbol on the diagram, which we use to represent steps that are purely a preparation for a comparison.)
Now that we’ve re-written the algorithm like this, it is easy to adapt it to solve problems (B) and (C) from the previous exercise.
For problem (B), to change the range of numbers to add from 1 to 15, we simply increase the value stored in r2
load r2 15 ; We want to add up the numbers up to 15
load r1 0 ; Set the accumulated total to zero
load r0 1 ; We want to start with 1
add r1 r0 ; add the current number in r0 to the total in r1
add r0 1 ; increment the current number by 1
compare r0 r2 ; compare the current number with the maximum number
jumplte -3 ; if the current number is less than or equal to the maximum number, jump up 3
write 0 r1 ; otherwise, write the accumulated total from r1 into memory location at offset 0
exit
For problem (C), to change the start of the range, we change the value stored in r0:
load r2 15 ; We want to add up the numbers up to 15
load r1 0 ; Set the accumulated total to zero
load r0 5 ; We want to start with 5
add r1 r0 ; add the current number in r0 to the total in r1
add r0 1 ; increment the current number by 1
compare r0 r2 ; compare the current number with the maximum number
jumplte -3 ; if the current number is less than or equal to the maximum number, jump up 3
write 0 r1 ; otherwise, write the accumulated total from r1 into memory location at offset 0
exit
That’s great – we could imagine increasing the number of items in the sequence to any arbitrarily large value, and we wouldn’t run out of program memory.
Let’s remind ourselves of the original program:
load r1 0
add r1 1
add r1 2
add r1 3
add r1 4
add r1 5
add r1 6
add r1 7
add r1 8
add r1 9
add r1 10
write 0 r1
exit
This needed as many instructions as there were numbers to add, plus a few instructions to set up and finish. We say that the number of instructions in the program instruction memory scales linearly with the number of items to process, or is of order n (where ‘n’ is the number of items to be processed). We have a shorthand notation for this called big O notation: we write it as O(n).
In general, we estimate the order of an algorithm by finding the bits that grow the fastest as we add more items into the input.
We could represent the storage cost of our algorithm diagrammatically, drawing a box scaled to the number of instructions required to implement each part of the algorithm.
Now, imagine that we double the number of items in the sequence and sketch it again (reducing the scale by half so it fits on the page)
And double the number of items again, reducing the scale by half once more.
Each time, the central “add” section becomes a more and more significant part of the storage cost of the algorithm.
Now, imagine a huge number of items. The contribution of the start and finish become vanishingly small, and the budget is taken up (to all intents and purposes) entirely by the central piece of the work, so we can ignore them when we estimate the order of our algorithm.
So, back to our program. With the new version, no matter how big the number, we only need a program 9 instructions long. So, we’ve really optimized our program storage. The storage requirement of this program is constant, or of order 1: O(1).
(We will look at how to calculate this more formally later on.)
This is clearly a good thing. But, if you look closely, you’ll realise we’re actually executing a lot more instructions than we were before. To add each number in the original program, here’s the code we had to execute:
add r1 3
But to add each number in the new program, here’s the code we have to execute:
add r1 r0
add r0 1
compare r0 r2
jumplte -3
Four instructions in the new program for each instruction in the old program. So the new program is more computationally expensive. About four times more, in fact, if you ignore the few instructions they both have for setup and completion, and you assume that all those instructions take the same amount of time to execute (which they almost certainly don’t – that jumplte is probably more effort than an add).
This may (or may not) be significant to us.
If we think about the computational cost of both implementations, they scale linearly with the number of items in the collection, so both the old and the new algorithm are O(n), but the new one is more expensive – especially for just a few items in the sequence. It could even be cheaper to write the instructions out individually, rather than use the loop, if we have fewer than, say, four numbers in the sequence.
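We can make those rough instruction counts concrete with a little Python, assuming (unrealistically) that every instruction costs one unit of time, and taking the counts from the two programs above. This is a back-of-the-envelope sketch, not a measurement:

```python
def unrolled_cost(n):
    """One load, one add per number, then write and exit."""
    instructions = n + 3
    return {"storage": instructions, "executed": instructions}

def loop_cost(n):
    """Three loads to set up, add/add/compare/jumplte per number, then write and exit."""
    return {"storage": 9, "executed": 3 + 4 * n + 2}

for n in (4, 15, 10_000):
    print(n, unrolled_cost(n), loop_cost(n))
```

Storage for the loop stays fixed at 9 instructions however big n gets, while the unrolled version needs n + 3 – but the loop executes roughly four times as many instructions to get the same answer.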
We call this process of trying to find a better, cheaper way to write our algorithms optimization. In this case, we’ve optimized our program for program storage, at the expense of some speed. We’ve made a significant improvement in the former (going from O(n) to O(1)), while still maintaining our O(n) computation cost – albeit that the new algorithm is potentially 4 times slower.
You will often find that you are making these trade-offs – space for time, or vice versa. Sometimes, you discover that an optimization for space actually improves the time as well – if a significant portion of the time was taken allocating or deallocating resources.
There’s another point to make about optimization, though. I’m using a lot of contingent words like ‘could’ and ‘may or may not’ in terms of the computational cost or benefit of one algorithm or another. This is because we haven’t measured it. And while it is almost certainly the case that an order-of-magnitude improvement in the efficiency of an algorithm in either storage or computation (or both) will improve the overall performance of your software, that is by no means guaranteed, especially where there are complex interactions with other parts of the system, and different usage patterns. We could expend a week’s effort trying to improve this algorithm, get it as good as it could possibly be, and yet it would still not give us any real benefit, because the end users only cause the code to be executed once a month, it completes in 50ms, and that happens overnight as part of a job that takes 20 hours to run. Worse still, we didn’t measure how long it took in a variety of real-life situations to provide a baseline, and then repeat those measurements to see if our so-called “optimized” version has actually had a positive effect.
One of the things programmers, in general, are very bad at doing, is optimizing irrelevant things. We call this premature optimization. We also tend to focus on the optimization of little details. We call this micro-optimization. It is very tempting to get drawn into this as we have to be so focused on the minutiae of the code we are writing, that we can lose the big picture. Our optimizations can also make the code harder to read, and more difficult to maintain – and perhaps also more prone to bugs.
You could argue that both the programs we’ve written so far implement the same basic algorithm – they step through each item in the sequence in turn, adding it to the total. The second version uses a loop to do that. The first version just expanded it so that there is an instruction that represents each iteration around the loop (we call that loop unrolling).
If we’re going to significantly improve both the storage requirements and the computational cost of this program, we’re going to need an algorithm that is O(1) for both storage and computation. Algorithm selection almost always has a bigger impact on performance than micro-optimizations. So, can we come up with a better one?
Let’s look at the sum of the numbers from 1 to 15 again.
We can write this as (leaving out some of the terms in the middle to avoid getting too bored…)

S = 1 + 2 + 3 + … + 13 + 14 + 15    (1)

Equally we could start at the other end, and write it as

S = 15 + 14 + 13 + … + 3 + 2 + 1    (2)

We get exactly the same answer.
Now we’re going to do a bit of simple algebra – we’re going to add these two equations together. In this case, the equations are so simple that we can do that by adding up all the terms on the left hand sides, and all the terms on the right hand sides, pairing each term of equation 1 with the one directly below it in equation 2:

2S = (1 + 15) + (2 + 14) + (3 + 13) + … + (13 + 3) + (14 + 2) + (15 + 1)    (3)
Let’s look at the right hand side of equation 3 more closely. For each of the 15 terms in our original sum, we’ve still got exactly one term – and it’s an addition. Moreover, that sum is always the same – in this case 16. In fact, we’ve got 15 lots of 16.
Rewriting that again, then

2S = 15 × 16 = 240

And dividing through by 2:

S = 240 ÷ 2 = 120
Which looks very much like the right answer.
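The pairing trick is easy to check against the brute-force loop in Python:

```python
# Brute force: add each number in turn, as our loop program does
brute = sum(range(1, 16))

# Pairing trick: 15 pairs, each summing to 16, counted twice - so halve it
clever = 15 * 16 // 2

print(brute, clever)  # 120 120
```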
Before we delve into this further, let’s try that again for the case when we’re adding up the numbers from 5 to 15.
There are 11 numbers between 5 and 15 (inclusive). First, we’ll write them out from low to high

S = 5 + 6 + 7 + … + 13 + 14 + 15    (1a)

And then from high to low

S = 15 + 14 + 13 + … + 7 + 6 + 5    (2a)

Then add them together

2S = (5 + 15) + (6 + 14) + (7 + 13) + … + (13 + 7) + (14 + 6) + (15 + 5)    (3a)
Again, we have one term on the right hand side of equation 3a for each of the 11 terms we had in the original sum, and this time each of those terms adds up to 20.
So, 2S = 11 × 20 = 220, which gives S = 110.
Right again.
Now, let’s think about the general case of some starting integer m to some general maximum integer n.
First, how do we calculate the number of terms in this case?
Well, in the case of the sequence 1…15, there are 15 terms, which is (15 – 1) + 1.
For the sequence 5…15 there are 11 terms, which is (15 – 5) + 1.
In the general case, there are (n – m) + 1 terms.
Another way of looking at the last term ‘n’ is to say that it is the starting term, plus the number of terms, less one (to account for the fact that we’ve already included the first term), i.e.

n = m + (number of terms) – 1

From our previous equation, we can see that

n = m + ((n – m) + 1) – 1 = m + (n – m)
So, what does our sum look like if we write it out in these general terms?
Here’s the version starting with the first number in the sequence, m:

S = m + (m + 1) + (m + 2) + … + (m + (n – m – 1)) + (m + (n – m))
Notice how we’ve expanded the last few terms to calculate them using our expression for the number of items in the sequence. This looks a bit redundant: after all, m + (n-m-1) = n-1 and m + (n-m) = n, but you’ll see why it is important we express it that way in a moment.
As before, we’ll write that the other way around, high to low, expressing it in terms of the highest number in the sequence, and our expression for the number of terms:

S = n + (n – 1) + (n – 2) + … + (n – (n – m – 1)) + (n – (n – m))
And then add them up again

2S = (m + n) + (m + n) + … + (m + n)
Here, you can quickly see that each term adds up to (m + n) – which is suggestive of a solution. (Note that this isn’t a proper proof! Anything could be happening in those bits in the middle. But, as it happens, this solution is correct, and generalizable.)
So, just as before, we can write our equation as

2S = (number of terms) × (value of each term), or

2S = ((n – m) + 1) × (m + n)

If we divide both sides by 2, we have a general formula for calculating the sum of all the integers from m to n:

S = ((n – m) + 1) × (m + n) / 2
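Here’s the general formula checked against a brute-force sum in Python. Note that however big the range, the formula itself is a fixed handful of operations — exactly the O(1) compute cost we were after:

```python
def sum_m_to_n(m, n):
    """Sum of the consecutive integers m..n: (number of terms) x (m + n), halved."""
    return ((n - m) + 1) * (m + n) // 2

# Check against the brute-force loop for the examples from this section
for m, n in [(1, 10), (1, 15), (5, 15)]:
    print(m, n, sum_m_to_n(m, n), sum(range(m, n + 1)))
```

The two columns of results agree: 55, 120 and 110 respectively.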
Or we could use the mathematical sigma notation – it means the same thing: “the sum of each i from i = m to i = n”:

∑ i (from i = m to n) = ((n – m) + 1)(m + n) / 2
Armed with this new algorithm, we can write a much better program to calculate the sum, in terms of both the space it needs, and the number of instructions it has to execute.
Can you devise a program to calculate the sum of any sequence of consecutive, positive integers?
Hint: There are instructions in our computer called mul and div which perform multiplication and division, just like our add and sub instructions.
A geometric progression is one in which each term is a constant multiple of the previous term:
For example 3, 6, 12, 24, 48 is a geometric progression where each term is double the previous one, i.e. a = 3 and r = 2.
The formula for the sum of a geometric progression (which we call a geometric series), where a is the value of the first term in the sequence and r is the constant multiplier, summing the terms a·r^i from i = m to i = n, is:

S = a(r^(n+1) – r^m) / (r – 1)
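Assuming the series runs over the terms a·r^i for i = m to n (the convention the exercise’s variables suggest), we can sanity-check that closed form against a brute-force sum in Python:

```python
def geometric_series(a, r, m, n):
    """Sum of a * r**i for i = m..n (assumed convention), for integer r != 1."""
    return a * (r ** (n + 1) - r ** m) // (r - 1)

# 3 + 6 + 12 + 24 + 48 is a = 3, r = 2 over i = 0..4
print(geometric_series(3, 2, 0, 4))          # 93
print(sum(3 * 2 ** i for i in range(0, 5)))  # 93
```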
Implement a program that can calculate the result of the geometric series for any arbitrary values of a, r, m and n.
Can you estimate the compute cost of your implementation, using big O notation? What about the storage cost?
We’ve learned several important things in this section:
In the next section, we’re going to look at the data memory in more detail, and learn how we can represent numbers in a computer.