To avoid this, you can inject the $injector service into the templateRepository, and defer the dependency resolution until you make the call.

Something like this:
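As a framework-free sketch of the idea (a tiny stand-in injector rather than Angular's real `$injector`; all names here are hypothetical):

```javascript
// A minimal stand-in injector: services are registered as factories and
// only constructed the first time they are requested.
function makeInjector(factories) {
  const instances = new Map();
  return {
    get(name) {
      if (!instances.has(name)) instances.set(name, factories[name]());
      return instances.get(name);
    },
  };
}

// Instead of depending on templateRepository directly (which would force it
// to be resolved when this service is constructed), we take the injector and
// defer resolution until the call is actually made.
function makeViewService(injector) {
  return {
    getTemplate(contentType) {
      const templateRepository = injector.get('templateRepository'); // deferred
      return templateRepository.lookup(contentType);
    },
  };
}
```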

For example, we might have resources like this:

And one view that binds to that first resource type

And another that binds to that second resource type

Notice that the views have controllers specific to the content type of the resource they expect to present, and bind to properties of the content of that type.

Let’s assume that there’s a parent controller that gets this resource from some url that may return one of a number of different ‘greeting-like’ resources of this kind (think of it as a polymorphic type in classical OO terms). This then presents the appropriate view by doing an ng-include of the relevant view, using the content type.

If you try this at home, and look at your network traffic, you’ll see that the ng-include directive uses the $templateCache to go off and try to find the view. Unfortunately, this makes an HTTP request for a resource at “/application/vnd.endjin.test.greeting”.

While you could construct a controller which returns an appropriate resource at that path, what we’d really like to do is to translate this into a request for an appropriate view for the content type.

One way to do this would be to provide a mechanism on our parent controller to do this translation:

And then call that from our view

Of course, this would require changing all our controllers (even if we wrapped the actual work up into a service they all consume).

And the code in the view is pretty ugly, too.

A better approach might be to create a *decorator* for the $templateCache to do the lookup for us, behind the scenes.

Here’s an example using that technique:
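As a hedged, framework-free sketch of what such a decorator does (in Angular itself you would register it with `$provide.decorator`; all names here are hypothetical):

```javascript
// A trivial template cache: put/get templates by key.
function makeTemplateCache() {
  const store = new Map();
  return {
    put: (key, template) => store.set(key, template),
    get: (key) => store.get(key),
  };
}

// Decorate the cache: when a key is requested, first ask a templateRepository
// to translate it (e.g. a content type into a view); fall back to the
// undecorated lookup for ordinary template keys.
function decorateTemplateCache(cache, templateRepository) {
  const originalGet = cache.get;
  cache.get = (key) => {
    const translated = templateRepository.lookup(key);
    return translated !== undefined ? translated : originalGet(key);
  };
  return cache;
}
```

Because each decorator wraps the previous `get`, several view providers can each register their own decorator and they will all get a go.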

The very cool thing about this is that I can have multiple view providers (e.g. for different families of content types), and each can register its own decorator, and they will all get a go!

The templateRepository service that this code depends on abstracts the lookup of the template content type (so it could be as simple as a loop back through the $templateCache for a specific named template).

Obviously, you could make this more complex – defer a lookup over HTTP using promises, for example, but the principle is the same.

And our view goes back to being very clean and simple:

The first version, developed by Mark Otto and Jacob Thornton at Twitter in 2011, was a pure CSS library. It was designed to provide a solid foundation for the most common user interface tasks that front-end developers faced every day, including grids and page layouts, typography, navigation, input and forms. It hid the chronic problems of layout incompatibilities between different browser versions, and offered a consistent styling and customization mechanism with the LESS CSS dynamic stylesheet language. Clearly, it fulfilled that need very successfully: by 2012 it had become the most watched and forked project on GitHub.

But 2011/2012 was also the year that SmartPhones and Tablets exploded onto the consumer scene. In the UK alone, 34.3% of the population had SmartPhones in 2011, and by 2012 this figure had grown to 41.7%. In the US, there were over 28 million iPad users in 2011, growing to over 53 million in 2012. (Data from New Media Trend Watch.)

Web developers quickly learned that the experience of browsing on these mobile devices with their smaller screens, limited bandwidth and touch (or pen) interfaces was very poor.

The first response was to try to develop “mobile” versions of existing sites. These chopped out functionality, using much simpler page layouts and smaller (or no) images to help reduce the bandwidth demands, and fewer complex user interactions or animations that mobile browsers couldn’t support, or that didn’t work so well when you’re prodding a screen with a finger or stylus. This was so-called **graceful degradation**. It was usually implemented by detecting that you were running on a mobile browser, and redirecting the user to a special mobile-only site.

However, there was another school of thought. Steven Champeon, at an SXSW conference in 2003, had coined the term **Progressive Enhancement** for an approach to design that starts out by considering the simplest version (using very basic mark-up and navigation features that are available on all browsers), and enhances it if more features are available on the user’s client platform, by linking external CSS and JavaScript. The progressive enhancement movement also embraced the need for developers to consider issues of accessibility and semantic structuring of content.

The SmartPhone revolution of 2011 brought these ideas into the mainstream. The notion of designing for the simplest version first became known as **mobile first**, and the notion of seamlessly enhancing and adapting layout and interactions with CSS and JavaScript became known as **responsive design**. Responsive design was particularly appealing to developers who struggled with the costs and challenges of developing and maintaining two separate sites for mobile and desktop clients, and it became one of .Net magazine’s top trends for web development in 2012.

The Bootstrap developers were aware of the need to support responsive design techniques, and in 2012 they released Version 2 of the framework. This included some (optional) files that used the newly enhanced CSS3 `@media` queries (which were widely supported by both desktop and mobile browsers) to allow the grid and other layout elements to adapt to different screen sizes. You could choose whether to use a **fixed** grid (where the grid columns are a fixed pixel size), a **breakpoint** approach (where the grid switches size at particular thresholds corresponding to common screen widths) or a **fluid** approach (where the grid adapts seamlessly as the display size changes on a particular device, or as a browser window is resized).

By 2013, it became apparent that it would probably be less than a year before a majority of page views on the web would be from mobile devices (or, at least, non-desktop devices, including tablet, mobile and smart TVs). The argument for a mobile first design philosophy becomes much stronger if the majority of your visitors might be coming in on a mobile device!

With that in mind, the Bootstrap team took a strategic decision to bake responsive design into the core of the framework for their version 3 release, and to encourage a mobile-first approach.

Their key goals were:

- To simplify the CSS to make it smaller, and quicker to render on low-powered devices. This included changing their default styles to remove gradients and shadows (which are still not well supported across all browsers, and expensive to render), focusing on a clean, flat colour scheme.
- To make the basic layout engine responsive by default
- To make all graphical elements (including the standard icon set) scalable by the use of Web Fonts
- To give more control over the layout in mobile form-factors. In Version 2, mobile layouts would always stack vertically. In version 3, you can be more flexible.

In this series, we’re going to look at some techniques for mobile-first design. We’ll consider the needs of the mobile user versus a desktop or tablet user, along with the impossible challenge of being all things to all people. We’ll see how to use Bootstrap Version 3 to progressively enhance their experience and minimize the impact on power consumption, bandwidth, SEO and accessibility, without adversely impacting the cost of developing and maintaining the code.

We’re also going to see how Bootstrap alone is not enough to meet the technical requirements of a mobile-first design philosophy, and how some simple CSS and JavaScript techniques can be used to help optimize the implementation of your site for a mobile user.

We’re also going to look at how to work with and overcome the constraints of the Bootstrap framework, and learn how to produce semantic HTML, customized to your particular requirements.

We’ll get started by looking at some basic tools of responsive design (including the Bootstrap 3 grid system), and then think about what mobile-first means for the creative process.

We’ve learned a bit about computer architecture – all the way up from the transistors that form the physical basis of modern digital computers, to more abstract concepts like data memory, instructions and I/O. We’ve looked at how we can encode information like the positive and negative integers into the on-and-off world of a digital computer, using 1s and 0s (bits) and some of the ways in which the constraints of that digital representation differ from “regular” mathematics.

We’ve also learned about Boolean algebra, and how to construct complex logical propositions from simple operators, as the foundation for our decision making.

We’ve written our first programs, using very low level instructions like the ones provided by the processor manufacturers, and seen how we can estimate the cost of the algorithm those programs embody in terms of both the size of the program and amount of memory it consumes (storage costs), and the time it takes to execute (computational cost).

The complexity of writing programs at that very low level quickly became apparent, and we started to look at a higher-level, more declarative programming language called F# to help us express the intent of code more clearly.

In our first real foray with F#, we learned how to create a function which takes an input value and maps it to an output value. We also saw how we could compose functions (using the output of one function as the input of another function) to solve more complex problems; in our first example we used a function that itself returned a function to simulate a function with multiple parameters, and then used that technique to implement the XOR derived operator in F#.

In this section we’re going to start looking at a more real-world problem, and see how we can use functions to solve it.

Right. Let’s imagine that I’m sick of my job, and I have designated 14:00-15:00 as my official “Lottery Fantasy Hour”. There’s even a cost centre for it on my timesheet.

The average jackpot prize on the UK National Lottery is about £2,000,000. The way it works is that you select 6 numbers from 1…49. On the day of the draw, 6 “main numbers” are drawn. Match all those, and you win the jackpot. The order of the numbers doesn’t matter – just whether they match or not. There’s a load of other extraneous rubbish about a bonus ball that comes in to play if you’ve matched 5 main numbers, but they don’t win you the jackpot. And in my lottery fantasy, it is all about the Jackpot. OK?

So, the question is, what are the odds of my winning the jackpot?

Well, I’ve got 6 numbers out of the 49.

When the draw happens, the chance that the first ball that comes out of the machine is one of mine is therefore 6 out of 49. (There are just 6 balls that could come out, from the 49, that would match one of my numbers.)

If that matched, then I’ve got 5 numbers left; and there are 48 balls in the machine. So the chance that the second ball that comes out is one of mine is now 5 out of 48.

If I’m still in the game, then I’ve got 4 numbers left to choose from, and there are 47 balls still in the machine. So the chance that the third ball that comes out matches one of those is 4 out of 47.

You can see where this is going. If that matched too, then I end up with 3 out of 46, 2 out of 45 and finally (and I’m on the edge of my seat now) 1 last ball from the 44 remaining that could match and win me the big money.

You might remember that when we have probabilities like this, each of which depends on the previous result, we can multiply them together to get the overall probability.

Let’s try that.

6/49 × 5/48 × 4/47 × 3/46 × 2/45 × 1/44 = 720/10,068,347,520 = 1/13,983,816

That’s one in 13,983,816.

So, I’ve got about a 1 in 14 million chance of winning about 2 million pounds. Hmmm. Doesn’t sound good.

Let’s look at the Euro lottery instead. This is a pick 7 numbers out of 50 draw, and the average jackpot is about £55,000,000.

Applying the same logic as above, we end up with

7/50 × 6/49 × 5/48 × 4/47 × 3/46 × 2/45 × 1/44 = 5,040/503,417,376,000 = 1/99,884,400

That’s one in 99,884,400.

That makes me about 7 or 8 times less likely to win, but the amount I’m likely to win is over 20 times higher! I can feel my yacht beckoning.
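A quick worked check of those two claims:

```latex
\frac{99{,}884{,}400}{13{,}983{,}816} \approx 7.1
\qquad\text{and}\qquad
\frac{55{,}000{,}000}{2{,}000{,}000} = 27.5
```

So the Euro jackpot is roughly 7 times harder to hit, but 27.5 times bigger.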

Those numbers are a bit depressing, though. I’m probably going to be sat at my desk doing the Lottery Fantasy Hour until I die. Maybe I need a better approach.

What if I *ran* a lottery instead of playing it? And made it available to everyone in the office?

*Step 1.* Ignore all local gambling laws **[1]**

*Step 2.* Develop a lottery designed to keep people playing.

**[1]** This is not legal advice.

Let’s say there are 30 people in our office, and they all opt in at 1-unit-of-local-currency a week. £1, for instance.

Over the course of, say, 10 years, we want someone to win every other week (for that roll-over excitement).

At 30 plays a week, 52 weeks a year, for 10 years, that’s 15,600 plays, and we want a win every other week, which is 260 winners. So the odds of a jackpot win want to be somewhat less than 260 in 15,600, which is 1 in 60.

I also own 10 ping pong balls, a sharpie and a black felt bag. So we’re going to be doing a draw of some-number-of-balls from 10.

So – how can I work out what the odds would be for different numbers of balls in the draw?

Let’s remind ourselves of the odds for drawing 7 from 50.

7/50 × 6/49 × 5/48 × 4/47 × 3/46 × 2/45 × 1/44 = 1/99,884,400

And 6 from 49

6/49 × 5/48 × 4/47 × 3/46 × 2/45 × 1/44 = 1/13,983,816

Can we see any patterns?

Let’s take the bottom of the fraction first. That’s a function of the number of balls we get to choose – let’s call that number k.

**Spot test:** Can you write out an expression for this function of k, as a product of its terms?

**Answer:** k × (k−1) × (k−2) × … × 2 × 1

We call this function ‘factorial’, and we usually write it as k!

Now, let’s look at the numerators. They clearly aren’t just factorials, but they seem to be related.

Because 49! is about 6 × 10⁶² and a bit big to keep in our heads, we’ll pick a smaller example.

Let’s look at picking three numbers from five.

Again, the bottom is easy – as usual, that’s 3! = 3 × 2 × 1 = 6

What about the top? First, let’s multiply out so we can see what sort of number we’re dealing with: 5 × 4 × 3 = 60.

Now, we know that

5! = 5 × 4 × 3 × 2 × 1 = 120

But we don’t want all of that – we just want 5 × 4 × 3 = 60. It is too big, to the tune of a factor of 2 × 1 = 2.

No problem, we can just divide it through.

5! / 2 = 120 / 2 = 60

And we recognize that 2 is a factorial too (2 × 1 = 2!), and we end up with

5! / 2! = 60

It should be obvious where the 5 comes from – that is just the number of balls we’ve got to pick from.

But how did we get the 2? We want to end up with a number of terms equal to the number of balls we’re drawing. So we need to divide out by the factorial of the total number of balls (which we can call n), less the number of balls to pick (which we have been calling k) – that is, by (n − k)!.

In this example, (n − k) = (5 − 3) = 2, so, as we expected, we divide by 2!

So, if we are picking k from n, our numerator is always n! / (n − k)!

Now, we can go back and combine our denominator with our numerator to provide the equation that allows us to calculate the probability of winning any draw-k-balls-from-n lottery…

**Spot test:** can you substitute our factorials back in to our equation?

**Answer:** The number of combinations is n! / (k! × (n − k)!), so the odds of winning are 1 in n! / (k! × (n − k)!).

We call this the **combination function** as it tells us the number of ways we can pick k items from a set of n items, if the order of selection does not matter.

Sometimes, you see this k-from-n combination function written down like this: C(n, k) – or with the n stacked above the k inside big brackets.

In this form, we call it a binomial coefficient, and read it as “n choose k”. There are loads of applications for this – wherever we need to choose a subset of items from some larger set. Lottery fantasies are just one.

OK, so given a particular number of balls (n), I could use this function to display a table that shows me the odds of winning the jackpot, given a particular number of balls in the draw (k).

In the interests of not getting bored, we can turn this into an F# function:

Something like

`let lotteryOdds n k = factorial n / (factorial k * factorial (n-k))`
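As a sanity check for our office lottery (a worked example with our n = 10 ping pong balls, computed by hand from the formula):

```latex
\binom{10}{2} = \frac{10!}{2!\,8!} = 45,\qquad
\binom{10}{3} = \frac{10!}{3!\,7!} = 120,\qquad
\binom{10}{4} = \frac{10!}{4!\,6!} = 210
```

So a draw of 3 balls from the 10 gives odds of 1 in 120 – comfortably longer than our 1-in-60 target – whereas 2 balls (1 in 45) would pay out too often.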

That’s a good start – but it won’t work just yet. We have to implement that factorial function.

One way to do that is to use a very powerful tool called recursion.

Let’s look back at our factorial function again.

x! = x × (x−1) × (x−2) × … × 2 × 1

What about the expression for (x−1)!? Can you write out its expansion in the same way?

(x−1)! = (x−1) × (x−2) × … × 2 × 1

They look really similar – in fact:

x! = x × (x−1)!

To explore this further, let’s see if we can write that as a function in F#

Here’s a first effort.

`let factorial x = x * factorial (x-1)`

Notice that we’re calling the factorial function from within the definition of the factorial function itself! This is what we call **recursion**.

Unfortunately, if we try that, F# comes back with an error:

`let factorial x = x * factorial (x-1);;`

` ----------------------^^^^^^^^^`

`stdin(3,23): error FS0039: The value or constructor 'factorial' is not defined`

In some languages (C++, C# or Java for instance), this wouldn’t be an error, but in F# there’s a special bit of syntax we use to specify that a function can be called recursively. We have to add the keyword `rec`.

So here’s our second go.

`let rec factorial x = x * factorial (x-1)`

OK – F# responds happily with

`val factorial : x:int -> int`

BUT! Before we call it, let’s work this through on paper for a simple example and see what happens. We’ll write each recursive call on a separate line, and indent so we can see what is happening.

`factorial 5 = `

` 5 * `

` (5-1) *`

` (4-1) *`

` (3-1) *`

` (2-1) *`

` (1-1) *`

` (0-1) *`

` (-1-1) *`

` ...`

Oh dear! This is going to go on for ever! We need it to stop, eventually. The problem is that whenever we’ve been doing factorials by hand, we’ve stopped before we spill over into the negative integers.

How can we persuade our function to stop?

Well, we’re missing one important fact about factorials. When we get to 0!, we say that it is, by definition, 1, and is not, therefore, defined in terms of the factorial of x-1. This gives our recursion an end. That means our original attempt to define a factorial function was wrong – it should have looked more like this:

`let rec factorial x =`

` match x with`

` | 0 -> 1`

` | x when x > 0 -> x * factorial(x-1)`

` | _ -> failwith "You cannot calculate the factorial of a negative number using this function."`

The keyword here is `match` – we’re going to `match x with` a variety of different patterns.

Note that we start each pattern definition on a new (indented) line with the vertical pipe symbol `|`. (This kind of looks a bit like our big curly bracket in our mathsy version of the expression.)

As I mentioned, there are a variety of different patterns we can use, and in this function, we use all three kinds. Let’s look at each one in turn.

`| 0 -> 1`

This one is fairly straightforward; we can read it as “if x is 0, then the match goes to 1”. We can use this match for any particular value of x. For example, we could hard wire the result for 5! if we wanted, by adding the additional match:

`| 5 -> 120`

(We won’t, though.)

The second match is a slightly more complex expression

`| x when x > 0 -> x * factorial(x-1)`

We can read that as “for all values of x when x is greater than 0, the match goes to our recursive factorial function call.”

What about the last one?

`| _ -> failwith "You cannot calculate the factorial of a negative number using this function."`

This `_` symbol we use to mean “for all other cases”, and in this example we’re using a special F# function called `failwith` which raises an error with a message. Notice that we’ve put the message in quotation marks – this marks it out as a **string** – which is a way of representing text in the computer. We’ll have more on that later.

Of course, you don’t have to use the match keyword in recursive functions alone. In signal processing, there’s a thing called a high-pass filter. If the signal is above a certain frequency, then it does nothing, otherwise it attenuates the signal to zero. Think of it a bit like a bass-cut button on your hifi – it leaves the high-frequency sound alone, but trims out the low-frequency signals.

We could write down a function for this (using a big curly bracket for the two cases):

hipass(x) = { 0, if x < xmax; x, otherwise }

And then convert this into an F# function

`let hipass x xmax =`

` match x with`

` | x when x < xmax -> 0`

` | _ -> x`

Let’s give that a go. If the signal is 10 and the high-pass filter is set at 5, we get:

`hipass 10 5`

F# responds

`val it : int = 10`

Good – so above the threshold our original number is passed through.

Let’s try one below the threshold.

`hipass 2 5`

`val it : int = 0`

**Spot test:** What will the response be if I choose a value exactly at the threshold?

**Answer:**

`hipass 5 5`

`val it : int = 5`

Is that what you expected? Notice that the threshold specifies strictly less than, so values at the threshold will be passed through.

Ok, back to our factorial function.

`let rec factorial x =`

` match x with`

` | 0 -> 1`

` | x when x > 0 -> x * factorial(x-1)`

` | _ -> failwith "You cannot calculate the factorial of a negative number using this function."`

Let’s try it out.

`factorial 5`

`val it : int = 120`

So far, so good!

What about

`factorial 50`

`val it : int = 0`

Zero? What’s happened here? Well, we’re back to the problem of representing numbers in computer memory again. Remember that a signed 32 bit integer can store a number up to 2³¹ − 1 (about 2.1 × 10⁹) in magnitude. 50! is approximately 3 × 10⁶⁴ – somewhat larger than we can cope with! Larger, even, than a 64 bit integer could represent. In fact, a signed 128-bit integer would still be too small. We’d need to double up again to a signed 256-bit integer to cope with a number this large.

(Factorials get really big, really quickly – and that will be important again, later.)

Clearly, we’re going to have to look at using a different data type to represent our numbers. F# provides us with a type called `bigint` which can be used to represent arbitrarily large numbers (limited, more or less, by the amount of data memory in your system).

We need to represent this very large number in the output of our factorial function, but we’re still happy to pass an integer in as our parameter. Here’s a go at defining the function to use `bigint`:

`let rec factorial x =`

` match x with`

` | 0 -> 1I`

` | x when x > 0 -> bigint(x) * factorial(x-1)`

` | _ -> failwith "You cannot calculate the factorial of a negative number using this function."`

Try that, and F# responds

`val factorial : x:int -> System.Numerics.BigInteger`

So this is now a function that takes an `int` and returns a `System.Numerics.BigInteger`. (That’s the full name for our `bigint` type.)

There are two interesting lines in that function definition. First, there’s the one where we map the integer 0 to the bigint value 1.

`| 0 -> 1I`

Notice that we use the suffix `I` to indicate that this number should be interpreted as a big integer, rather than a regular 32 bit integer.

The other line of interest is in the factorial function:

`| x when x > 0 -> bigint(x) * factorial(x-1)`

The value `x` is an integer, and we use this syntax `bigint(x)` to convert it from an `int` into a `bigint`. We call this kind of conversion a **type cast**. We’ll have a lot more about types later in the series.

Let’s try that out.

`factorial 50`

`val it : System.Numerics.BigInteger =`

` 30414093201713378043612608166064768844377641568960512000000000000`

` {IsEven = true;`

` IsOne = false;`

` IsPowerOfTwo = false;`

` IsZero = false;`

` Sign = 1;}`

That looks like a very big number. Notice that `bigint` also seems to have a bunch of other values associated with it – whether it is even, whether it is a power of two, its sign, etc. We’ll learn more about that when we come to explore types.

OK, that’s great. We can now calculate the factorial of lottery-sized numbers. But is this recursive technique the best way to do it?

What happens if we try an even bigger number? Our BigInteger should be able to cope with the result, but let’s see what happens if we try to calculate

`factorial 1000000`

`Process is terminated due to StackOverflowException.`

Ouch.

Next time, we’re going to find out why that blew up so spectacularly.

Learning To Program – A Beginners Guide – Part One – Introduction

Learning To Program – A Beginners Guide – Part Two – Setting Up

Learning To Program – A Beginners Guide – Part Three – What is a computer?

Learning To Program – A Beginners Guide – Part Four – A simple model of a computer

Learning To Program – A Beginners Guide – Part Five – Running a program

Learning To Program – A Beginners Guide – Part Six – A First Look at Algorithms

Learning To Program – A Beginners Guide – Part Seven – Representing Numbers

Learning To Program – A Beginners Guide – Part Eight – Working With Logic

Learning To Program – A Beginners Guide – Part Nine – Introducing Functions

Learning To Program – A Beginners Guide – Part Ten – Getting Started With Operators in F#

Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions

Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks – Recursion in F#

**Exercise 1: Remember the exercises in our first introduction to algorithms? Can you implement functions in F# for the sum of an arithmetic series and the sum of a geometric series?**

Remember that this is the formula for an arithmetic series:

S = (((n − m) + 1) × (am + an)) / 2

where d is the difference between each term, m is the index of the first term in the progression that we want to include in the sum, and n is the index of the last term we want to include; am is the value of the first term, and an is the value of the last term.

So, for the progression 1, 3, 5, 7, 9, 11

d is 2

m is 1 (from the 1st term)

n is 6 (to the 6th term)

am is 1

an is 11

We could define this function in F# as follows:

`let arithmeticseries m n am an = (((n - m) + 1) * (am + an)) / 2`

We’ve defined it with 4 parameters – m, n, am and an – and bound it to an identifier called `arithmeticseries` (although, of course, we could have picked any name we liked).

What do you think F# will respond for this definition?

`val arithmeticseries : m:int -> n:int -> am:int -> an:int -> int`

So, this is a function that takes an integer, and returns a function that takes an integer and returns a function that takes an integer and returns a function that takes an integer and returns an integer! A bit of a mouthful, but the same pattern as our “two parameter” function, and no more difficult to deal with!

Let’s try out our function on our example progression (whose sum is, incidentally, 1 + 3 + 5 + 7 + 9 + 11 = 36)

`arithmeticseries 1 6 1 11`

`val it : int = 36`

Looks good! Let’s move on to the second part of this exercise, the geometric progression.

Remember that a geometric progression is one in which each term is a constant multiple of the previous term: a, ar, ar², ar³, …

For example, 3, 6, 12, 24, 48 is a geometric sequence where a = 3 and r = 2, and its sum is 93.

The formula for the sum of a geometric progression where a is the value of the first term in the sequence, and r is the constant multiplier is:

S = (a × (1 − r^((n − m) + 1))) / (1 − r)

In the hints for this exercise, we mentioned a function called `pown` which raises one value to the power of another. Let’s use that to translate the formula into the definition of a function for a geometric series.

`let geometricseries m n a r = (a * (1 - pown r ((n - m) + 1))) / (1 - r)`

F# responds

`val geometricseries : m:int -> n:int -> a:int -> r:int -> int`

Now we can test it out.

`geometricseries 1 5 3 2`

`val it : int = 93`
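We can verify that by substituting m = 1, n = 5, a = 3 and r = 2 into the formula by hand:

```latex
\frac{3 \times \left(1 - 2^{(5-1)+1}\right)}{1 - 2}
= \frac{3 \times (1 - 32)}{-1}
= \frac{-93}{-1}
= 93
```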

Did you get that? If so, that’s definitely a moment of triumph. You’ve translated a fairly complex mathematical formula into a neat little function in F#. If not, go back and see how the F# function definition maps on to the mathematical expression. Don’t forget that the `pown` function is applied to the parameter(s) immediately to its right. Once you’ve worked it out, take a few seconds to bask in the moment of triumph, then move on!

**Exercise 2:** Another derived Boolean operator is called the **equivalence** operator. It is true if the two operands are equal, otherwise it is false. First, draw out the truth table for the equivalence operator. Then, work out a compact Boolean expression for it. Finally, implement the equivalence operator as an F# operator.

Here’s the truth table for the equivalence operator (for which we use the symbol ≡)

x | y | x ≡ y
--- | --- | ---
true | true | true
false | true | false
true | false | false
false | false | true

Compare this with the truth table for XOR

x | y | x XOR y
--- | --- | ---
true | true | false
false | true | true
true | false | true
false | false | false

Can you see that x ≡ y is the same as NOT (x XOR y)?

This gives us a big hint as to how we could implement it – by applying the not operator to our Boolean expression for XOR.

We could write that in F# as:

`let (|==|) x y = not ((x || y) && not (x && y))`

F# responds as you might expect for a standard “two parameter” function:

`val ( |==| ) : x:bool -> y:bool -> bool`

(You might have picked a different identifier for your operator, of course – the choice is yours.)

Let’s test that out by reproducing the truth table.

`true |==| true`

`val it : bool = true`

`true |==| false`

`val it : bool = false`

`false |==| true`

`val it : bool = false`

`false |==| false`

`val it : bool = true`

So far so good! It works, but it is a little more unwieldy than the XOR definition – we’ve added an extra term. Given that they are so similar, should it not be possible to express equivalence just as succinctly as we did XOR? The answer is yes, but to do that, we need to learn some more of the rules of Boolean algebra.

In regular maths, you’re probably so familiar with the rules of algebra, that you don’t even think about them as being laws at all, just “the way things are”. But there’s nothing magic about them – they’re just rules people have made up to try to create a consistent system of mathematics. Brace yourself. There’s a lot of detail coming up, so take it slowly and experiment with the rules as we come across them.

Two of the most familiar are called **associativity** and **commutativity**. Don’t be put off by the names if you haven’t heard them before – you’ll recognize them when you see them. Here’s an example of the law of **associativity** for addition.

(a + b) + c = a + (b + c)

You’re probably thinking “well, obviously!”. We saw a similar example when we were talking about operator precedence. If so, good – this should be obvious!

**Spot test:** give an example of the law of associativity for multiplication.

**Answer:** (a × b) × c = a × (b × c)

Now, **commutativity**. This is the idea that the ordering of the operands doesn’t matter. Here’s an example for addition.

a + b = b + a

**Spot test:** give an example of the law of commutativity for multiplication.

**Answer:** a × b = b × a

Again, you’re probably thinking that this is painfully obvious stuff.

OK – so let’s look at something a bit more complicated. What about **distributivity**? This is the idea that multiplication “distributes” over addition – like this:

a × (b + c) = (a × b) + (a × c)

**Spot test:** Does addition distribute over multiplication?

**Answer:** No, it doesn’t – in general, a + (b × c) ≠ (a + b) × (a + c)

Another law is called **identity**. This is the notion that there is some operation that results in the original operand.

Here’s the identity law for addition.

a + 0 = a

**Spot test:** what is the identity law for multiplication?

**Answer:** a × 1 = a

One last common law is the **annihilator** for multiplication. If you multiply anything by zero, you get zero.

a × 0 = 0

Notice how this “annihilates” the a term from the result.

We use these rules all the time to help us manipulate algebraic expressions. Remember when we were trying to derive a formula for an arithmetic progression? Amongst other things, we specifically used the fact that we could write the whole expression forwards or backwards, and that this would be equivalent – this relied on commutativity.

In our previous section on Boolean logic, we noted that the Boolean AND operator is broadly equivalent (in regular algebra) to multiplication, and the OR operator is equivalent to addition. This similarity holds true for all of these laws, for which there are equivalents in Boolean algebra.

**Spot test:** can you write out the laws of associativity, commutativity, distributivity, identity and annihilation for the Boolean operators and ?

**Answer:**

*Associativity*

(x ∧ y) ∧ z = x ∧ (y ∧ z)

(x ∨ y) ∨ z = x ∨ (y ∨ z)

*Commutativity*

x ∧ y = y ∧ x

x ∨ y = y ∨ x

*Distributivity*

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

*Identity*

x ∧ 1 = x

x ∨ 0 = x

*Annihilation*

x ∧ 0 = 0

Did you get that lot? Take a moment of triumph! If not, go back and look at the laws for regular algebra, and the equivalence of AND and multiplication, OR and addition, and see if you can work them out.

With practice, these laws of Boolean algebra will become just as ‘obvious’ as their equivalents in regular algebra. As usual, the laws aren’t complicated, but the symbols take some getting used to.

Of course, Boolean algebra is not exactly equivalent to the algebra you already know. It adds a few laws of its own.

First, there’s **idempotence**. This is the idea that if the inputs to the operator are the same, then the output is the same as the input.

x ∧ x = x

x ∨ x = x

This is very different from the equivalent expressions in regular algebra, where

x × x = x²

x + x = 2x

(So multiplication and addition are not idempotent!)

Another law is called **absorption**. Let’s have a look at the expressions, and you’ll see why it got that name.

x ∧ (x ∨ y) = x

x ∨ (x ∧ y) = x

It is as if the AND operator “absorbs” the OR expression that follows (and vice-versa).

There’s also a wrinkle with **distribution**. A minute ago, we saw how in regular arithmetic, multiplication distributed over addition

a × (b + c) = (a × b) + (a × c)

And in Boolean algebra, the equivalent AND distributes over OR

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

And while addition doesn’t distribute over multiplication…

a + (b × c) ≠ (a + b) × (a + c)

In Boolean algebra, OR does distribute over AND

x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)

Another place in which Boolean algebra is more symmetrical than the algebra we know and love is annihilation. There is also an **annihilator** for OR, as well as the one for AND.

x ∨ 1 = 1

Still with me? We’re nearly done; there’s one more set of laws to look at. We’ve covered AND and OR, addition and multiplication, but we’ve not yet had anything to say about negation.

As usual, the laws of multiplying and adding with negation in regular algebra are so familiar they seem to be stating the obvious.

(−a) × (−b) = a × b

(−a) + (−b) = −(a + b)

−(−a) = a

The first is the familiar “two negatives make a positive” rule for multiplication, the third is “double negation”. The second tells us that adding together two negated values is equivalent to adding together the two values and negating the result.

But in Boolean logic, the basic rules are a bit different, and called **complementation**.

x ∧ ¬x = 0

x ∨ ¬x = 1

¬(¬x) = x

We can also write down a couple of rules derived from these laws of complementation called **de Morgan’s laws**. These are really interesting because they allow us to express the AND operator purely in terms of the OR operator and negation; and vice-versa.

¬(x ∧ y) = ¬x ∨ ¬y

¬(x ∨ y) = ¬x ∧ ¬y
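
With only two Boolean inputs there are just four cases, so we can ask F# to check de Morgan’s laws exhaustively. This is a quick sanity check of our own, using the `&&`, `||` and `not` operators from earlier:

```fsharp
// both of de Morgan's laws, checked for one pair of inputs
let deMorganHolds x y =
    (not (x && y) = (not x || not y)) &&
    (not (x || y) = (not x && not y))

// ...and over all four input combinations
[ for x in [true; false] do
    for y in [true; false] -> deMorganHolds x y ]
// val it : bool list = [true; true; true; true]
```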

Now – let’s go back to where we started and look at our expression for the equivalence operator.

¬((x ∨ y) ∧ ¬(x ∧ y))

If we say

a = x ∨ y

and

b = ¬(x ∧ y)

Then we could rewrite this as

¬(a ∧ b)

But looking at de Morgan’s laws above, we can see that this is equivalent to

¬a ∨ ¬b

Now

¬a = ¬(x ∨ y)

and

¬b = ¬(¬(x ∧ y))

We know the rule for double negation – it is one of our complementation rules above. So it follows that

¬b = x ∧ y

We can substitute this back into our equation

¬(x ∨ y) ∨ (x ∧ y)

Remember our original expression for XOR?

(x ∨ y) ∧ ¬(x ∧ y)

We can use the law of commutativity to swap our expression for equivalence into the same form:

(x ∧ y) ∨ ¬(x ∨ y)

This is clearly simpler than the original form, and we say that we have **minimized** the expression.

When you get used the laws, you could have done this in one quick step; you’d remember de Morgan’s laws, negate the two terms either side of the central AND operator, and flip the operator from AND to OR. This is probably the most common day-to-day Boolean minimization you’ll carry out on real-world expressions.

Let’s check that it is still correct by implementing it in F#.

**Spot test:** Can you implement this new form for the equivalence operator in F#?

**Answer:**

`let (|==|) x y = (x && y) || not (x || y)`

F# responds with

`val ( |==| ) : x:bool -> y:bool -> bool`

So the form of our operator is, of course, still correct. Let’s check that it does what we expect!

`true |==| true`

`val it : bool = true`

`true |==| false`

`val it : bool = false`

`false |==| true`

`val it : bool = false`

`false |==| false`

`val it : bool = true`

There are various systematic approaches to minimization – from the repeated application of these laws of algebra, to something called a **Karnaugh Map**.

One simple way to minimize a function is to apply the law of complementation. Specifically, you look to rearrange any Boolean expression to generate terms that look like this:

x ∨ ¬x

Those terms are always 1 (true), and can then be immediately eliminated.
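
You can convince yourself in F# that a term of this form really is always true, whatever the input – a tiny check of our own:

```fsharp
// x OR (NOT x) is a tautology – true for either value of x
let tautology x = x || not x

[true; false] |> List.map tautology
// val it : bool list = [true; true]
```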

For example, consider this slightly brain-bending expression:

(x ∧ y ∧ ¬z) ∨ (¬x ∧ y ∧ ¬z)

We can rearrange this to something much simpler

y ∧ ¬z

No – really; we can! First let’s say

a = y ∧ ¬z

We can then rewrite the first expression as

(x ∧ a) ∨ (¬x ∧ a)

Applying our law of commutativity on each term, this is the same as

(a ∧ x) ∨ (a ∧ ¬x)

We recognize this as an example where our distribution law applies, so that becomes

a ∧ (x ∨ ¬x)

And since

x ∨ ¬x = 1

This becomes

a ∧ 1

Substituting the value of a back in, we get

(y ∧ ¬z) ∧ 1

Which is just

y ∧ ¬z

We went from the brain bending to the simple in a few sort-of-easy steps.

Let’s check that they’re actually the same, using F# to build the two truth tables. First for the complex expression.

**Spot test:** create an F# function to implement (x ∧ y ∧ ¬z) ∨ (¬x ∧ y ∧ ¬z), and then build the truth table for the expression.

**Answer:**

`let expression1 x y z = (x && y && not z) || (not x && y && not z)`

| x | y | z | expression1 x y z |
|---|---|---|---|
| 1 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 0 | 1 | 0 | 1 |
| 0 | 0 | 1 | 0 |
| 0 | 0 | 0 | 0 |

Notice how the results in the truth table don’t depend on the value of x at all – this is a good sign that we can eliminate x entirely.
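
We can make that observation concrete with a small check of our own (re-declaring `expression1` so the snippet stands alone): for every combination of y and z, flipping x leaves the result unchanged.

```fsharp
let expression1 x y z = (x && y && not z) || (not x && y && not z)

// the result with x = true always equals the result with x = false
[ for y in [true; false] do
    for z in [true; false] ->
      expression1 true y z = expression1 false y z ]
// val it : bool list = [true; true; true; true]
```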

Let’s try our second expression

**Spot test:** create an F# function to implement y ∧ ¬z, and then build the truth table for the expression.

**Answer:**

`let expression2 y z = y && not z`

| y | z | expression2 y z |
|---|---|---|
| 1 | 1 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 0 |
| 0 | 0 | 0 |

Success!

So, why bother minimizing at all? This is an excellent question. You’ll often hear arguments that it is “more efficient” to minimize the expression – but for anything but the most extraordinary expressions (or an implementation in discrete electronic components) this is a bit of a side issue.

The usual reason to minimize is to make it more *comprehensible*. Human brains are not great at double negatives and extremely long chains of reasoning, so a compact expression is generally more understandable.

However, this can also be a good reason *not* to minimize. If you have well-defined clauses (wrapped in parentheses and indented neatly) that mean something obvious individually, then it may be better to leave them un-minimized.

In the case of the equivalence operator, you might make the argument that the expanded version made it clear that it was the complement of the XOR operator – but I think that argument is a little weak. The minimized version is simple to read, and the two expressions are recognizably similar. When you get used to Boolean algebra, you will also recognize that they have the form of complementary expressions: they are otherwise identical, but all the ANDs and ORs are swapped around.

In the second example, the benefits of minimization were clear – we eliminated one entire variable, and made the expression much simpler to read.

So, the general rule for minimization is to make it as compact and understandable as possible.

OK, that was a lot of information. There’s no substitute for some practice with this stuff. For most developers, the rules of logic eventually become second nature, just like the ones for regular algebra. (So much so, that they often forget that they’ve learned them!)

With that in mind, here are a couple of exercises. The answers are at the bottom of the page.

**Exercise 1: Minimize the following Boolean expressions**

a) (x ∨ ¬y) ∧ (z ∧ ¬y)

b) ¬((x ∨ y) ∧ (z ∨ ¬y))

c) (x ∧ ¬y) ∨ ¬((x ∨ y) ∧ (z ∨ ¬y))

**Exercise 2: Implement F# functions for the expressions above (both minimized and as originally stated), and verify their truth tables.**

Learning To Program – A Beginners Guide – Part One – Introduction

Learning To Program – A Beginners Guide – Part Two – Setting Up

Learning To Program – A Beginners Guide – Part Three – What is a computer?

Learning To Program – A Beginners Guide – Part Four – A simple model of a computer

Learning To Program – A Beginners Guide – Part Five – Running a program

Learning To Program – A Beginners Guide – Part Six – A First Look at Algorithms

Learning To Program – A Beginners Guide – Part Seven – Representing Numbers

Learning To Program – A Beginners Guide – Part Eight – Working With Logic

Learning To Program – A Beginners Guide – Part Nine – Introducing Functions

Learning To Program – A Beginners Guide – Part Ten – Getting Started With Operators in F#

Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions

Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks – Recursion in F#

**Exercise 1**

a) ¬y ∧ z

b) (¬x ∧ ¬y) ∨ (y ∧ ¬z), or alternatively, (¬x ∨ y) ∧ (¬y ∨ ¬z)

c) ¬y ∨ ¬z

**Exercise 2**

a1) `let exa1 x y z = (x || not y) && (z && not y)`

a2) `let exa2 y z = not y && z`

b1) `let exb1 x y z = not ((x || y) && (z || not y))`

b2) `let exb2 x y z = (not x && not y) || (y && not z)`

or `(not x || y) && (not y || not z)`

c1) `let exc1 x y z = (x && not y) || not ((x || y) && (z || not y))`

c2) `let exc2 y z = (not y) || (not z)`
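
As a final sanity check of our own, we can ask F# whether the original and minimized forms of c) agree on all eight input combinations (re-declaring the two functions so the snippet stands alone):

```fsharp
let exc1 x y z = (x && not y) || not ((x || y) && (z || not y))
let exc2 y z = (not y) || (not z)

// compare the two forms across the whole truth table
[ for x in [true; false] do
    for y in [true; false] do
      for z in [true; false] ->
        exc1 x y z = exc2 y z ]
// val it : bool list with all eight entries true
```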

We’re going to build on that this time, so it might be a good idea to go over the key points:

1) A function takes exactly one input (parameter) and produces one output (result). We can write this as x → f(x)

2) We can bind a function to an identifier using the let keyword.

`let increment x = x + 1`

3) We can define a function (with or without a binding) by using a lambda

`fun x -> x + 1`

4) A function is applied to the value to its immediate right

`increment 3`

5) A function doesn’t have to return a simple value; it can return a function too

`let add x = fun y -> x + y`

6) Applying 4) and 5) allows us to create functions which appear to take multiple parameters. F# has shorthand syntax to help

`let add x y = x + y`

`add 2 3`

7) We can still capture the intermediate function, effectively binding one of its parameters. We call this ‘currying’

`let add2 = add 2`

Finally, we left off with an exercise.

**Exercise: Create a function that applies the logical XOR operator we worked out in the previous section.**

Remember that we learned that the derived operator XOR can be constructed from AND, OR and NOT operators like this:

x XOR y = (x ∨ y) ∧ ¬(x ∧ y)

You will probably also remember that the F# symbol for the logical AND operator is `&&`, OR is `||`, and NOT is `not`. Knowing what we do about functions, we can define a function for the XOR operator.

**Spot test:** Define a function bound to the identifier `xor` which implements the XOR operator.

As usual, give it a go yourself before you look at the answer. Check it out in your F# environment.

**Answer:**

`let xor x y = (x || y) && not (x && y)`

Try that, and F# responds

`val xor : x:bool -> y:bool -> bool`

So, we have defined a function bound to an identifier called xor that takes a boolean, and returns a function that takes a boolean and returns a boolean – our usual pattern for a function that “takes two parameters”.

We can now make use of this to build the truth table for XOR, tidying up a loose end from our section on logic.

`xor false false`

`val it : bool = false`

`xor true false`

`val it : bool = true`

`xor false true`

`val it : bool = true`

`xor true true`

`val it : bool = false`

That’s a good start, but it doesn’t look quite right. We’re calling our `xor` function in the standard way: applying the function to the value to its right (or **prefix** syntax). But the similar operators `||` and `&&` appear between the parameters, which we call **infix** syntax.

F# provides us with a means of defining a special kind of function called, unsurprisingly, an **operator**, which works in just this way.

Defining an operator is just like defining a function – with a couple of little wrinkles.

The first wrinkle is the name – the name has to consist of some sequence of these characters: `!`, `%`, `&`, `*`, `+`, `-`, `.`, `/`, `<`, `=`, `>`, `?`, `@`, `^` and `|`

The second wrinkle is to do with operator precedence. You’ll remember in the section on logic that we discussed how multiplication and division take precedence over addition and subtraction, and that logical AND takes precedence over logical OR. The precedence of a custom operator that we define is determined by the characters we use in its identifier. This can be a bit tricky to get used to!
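
Here’s a sketch of what that means in practice – the operator names below are invented purely for illustration. An operator whose name starts with `*` gets the precedence of `*`, and one whose name starts with `+` gets the precedence of `+`:

```fsharp
let (+++) a b = a + b     // binds like +
let ( *** ) a b = a * b   // binds like * (note the spaces: "(*" would start a comment!)

1 +++ 2 *** 3             // parsed as 1 +++ (2 *** 3)
// val it : int = 7
```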

For XOR we want a name that reminds us of the XOR symbol but which takes the same kind of precedence as OR. Let’s use `|+|`. It has got the pipe characters of OR, along with a plus symbol, so it looks vaguely similar.

So – how do we define an operator? As you might expect, the syntax is very similar to a function:

`let ({identifier}) x y = {function body}`

And here’s how we might define our XOR operator:

`let (|+|) x y = (x || y) && not (x && y)`

Just like a regular function binding to an identifier, except that we’re wrapping the identifier in parentheses (round brackets).

**Spot test:** What do you think F# will respond?

**Answer:** This is basically just our standard “two parameter” function pattern, so you’d expect a function that takes a boolean, and returns a function that takes a boolean and returns a boolean. And that’s just what we get. Notice that the round brackets are still shown around the identifier.

`val ( |+| ) : x:bool -> y:bool -> bool`

Now, though, we can try out our infix XOR operator.

`true |+| false`

`val it : bool = true`

`true |+| true`

`val it : bool = false`
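
Incidentally – a small aside of our own – the parentheses we used to define the operator also let us call it prefix-style, like an ordinary function:

```fsharp
let (|+|) x y = (x || y) && not (x && y)

// the parenthesized name can be applied like any other function
(|+|) true false
// val it : bool = true
```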

So, now we know how to define functions and operators, and we’re armed with a basic knowledge of logic, we can go on to try to solve some more complex problems. But first, a couple of exercises.

**Exercise 1: Another derived boolean operator is called the equivalence operator. It is true if the two operands are equal, otherwise it is false. First, draw out the truth table for the equivalence operator. Then, work out a compact boolean expression for it. Finally, implement the equivalence operator as an F# operator.**

*Hint: What is the relationship between the equivalence operator and the exclusive or operator?*

**Exercise 2: Remember the exercises in our first introduction to algorithms? Can you implement functions in F# for the sum of an arithmetic series and the sum of a geometric series?**

*Hint: It is probably useful to know that, in addition to `+` and `-`, F# uses `/` for division and `*` for multiplication. These are all infix operators. There is also a function called `pown` which is of the familiar “two parameter” prefix style, and raises one value to the power of another. Here’s 2³, for example:*

`pown 2 3`

`val it : int = 8`

(Answers will be at the start of next week’s instalment)


This is where functions come in.

A **function** is the smallest useful unit of a program: it takes *exactly one input*, and transforms it into *exactly one output*. Here’s a block diagram to represent that.

(It’s worth noting that when we’re talking about inputs and outputs in the context of a function, these aren’t the I/O operations of our block diagram back in the simple model of a computer – the keyboards, monitors, printers and so forth. The input is just a parameter of the function and the output is its result.)

As usual, there are lots of ways of describing a function. You might have seen this block diagram expressed in a more compact mathematical form:

x → f(x)

You can read that as ‘x’ ‘goes to’ ‘f-of-x’. Mapping that on to the diagram above, you can see that x is the input parameter, f is our function, and f(x) is our output – the result of applying the function f to x.

Let’s get more specific. How could we describe a function whose result is the value of the input, plus 1?

x → x + 1

So, as you might expect, you can read that as ‘x’ ‘goes to’ ‘x plus 1’. And by observation, we can say that

f(x) = x + 1

So, that’s the block diagram and the general mathematical form. What about some code?

Different programming languages have all sorts of different ways of defining a function. We’re going to focus on F# for our examples. The syntax may vary from language to language, but the principles are the same. If you learn the principles, you can apply that knowledge to any code you come across.

It’s not just different languages that introduce some variety in to the way you can define a function, though. There are lots of ways of defining a function in F#, depending on the context. We’re going to avoid some of that detail, for the time being, and start out with a simple example, where we define a function and bind it to an identifier, so that we can call it as often as we like.

There’s a lot of seemingly innocuous English words in that last sentence, like ‘call’ and ‘bind’. But what does it actually mean?

We say that we **call** a function when we give it an input, and ask it to evaluate the output for us.

An **identifier** is a named value, and when we **bind** a function (or other value) to an identifier, we associate the name with that function or value (forever!).

Here’s a simple example of a binding in F#. You can start up your F# runtime environment (`fsi` or `fsharpi`) and try it out.

`let x = 3`

(don’t forget to type `;;` when you want the runtime to evaluate your input.)

So, the syntax for a binding is as follows:

**let** {identifier} **=** {value}

F# responds with

`val x : int = 3`

You can read the result as ‘the value called x is a 32-bit integer which equals 3’.

What about this notion of a binding being forever? Let’s experiment with that by binding 3 to the identifier y, and then trying to bind 4 to that same identifier.

`let y = 3`

`let y = 4`

We want F# to evaluate both of these lines as a block, so we type the first line, press return, and type the second line, before typing our usual `;;` to kick off the evaluation.

F# responds with

` let y = 4`

` ----^`

`stdin(4,5): error FS0037: Duplicate definition of value 'y'`

This tells us that F# is not happy. Our second attempt to bind the identifier y was an error. It has even drawn a handy arrow to the bit of our statement that was wrong. You’ll get very familiar with F# errors before we’re done!

We don’t have to bind a simple value to an identifier, though, we can bind a function, too.

Remember that when we bind a simple value to an identifier, we use the syntax

`let {identifier} = {value}`

But to bind a function to an identifier, we need to include a name for the input parameter too, so we use the syntax

`let {identifier} {parameter} = {function body}`

Let’s try it:

`let increment x = x + 1`

F# responds with:

`val increment : x:int -> int`

We can read that as ‘the value called increment is a function which takes a 32-bit integer (called x, as it happens), and goes to a 32bit integer.’

We can then use the function. F# applies a function (`increment`, in this case) to the value immediately to its right, which it uses as its input parameter.

`increment 3`

F# responds:

`val it : int = 4`

**Spot test:** We learned how to read that kind of response in the previous section. What does it mean?

**Answer:** The result (‘the value called it’) is a 32 bit integer which equals 4.

So far so good! We’ve created our first function.

A function doesn’t have to map its input to a number, though. One thing we could return from a function is another function. Let’s have a look at an example of that.

`let add x = fun y -> x + y`

If we execute that line, F# responds with:

`val add : x:int -> y:int -> int`

Can we read that? The value called add is a function which takes a 32 bit integer called x, and returns a function which takes a 32 bit integer called y, which returns a 32bit integer.

It’s a bit long winded, but it’s quite straightforward if you follow it through, carefully. Read it a couple of times and see how it matches up with the F# response above.

Hang on a minute, you may be thinking. How exactly did we define the function that was returned? Let’s remind ourselves of the syntax for binding a function to an identifier again.

`let {identifier} {parameter} = {function body}`

In this case, the function body itself returns a new function. By inspection, this must be the code which defines the new function that forms the result:

`fun y -> x + y`

First, you can see that we are clearly not binding this function to an identifier. There’s nothing to say we couldn’t – we just haven’t. Secondly, we’re using a different syntax to define the function. We call this new syntax a **lambda**.

Compare it with the maths-y way of defining a function we looked at right at the beginning of this section: x → x + 1. It’s remarkably similar, but, as there isn’t a ‘→’ key on our computer, the designers of F# have used `->` as a cute replacement. We also add the prefix `fun` to tell F# that we are starting to define a function.

So, how do we read it?

`fun y -> x + y`

This means ‘a function which takes a parameter called y, and goes to some value x plus y’.

This seems to be a bit lacking in information. Where do we get the x from, for a start?

Remember that our whole definition was:

`let add x = fun y -> x + y`

We say that the lambda function is being defined in the **scope** of the definition of the add function, or that the add function is the **parent** (or an **outer**) scope of the lambda (which is, conversely, a **child** or an **inner** scope of the add function). You can think of scopes like a series of nested boxes. An inner scope has access to the identifiers defined in any outer scope which contains it, but an outer scope cannot reach into an inner scope and rummage in its innards.

So, our lambda gets its x value from the outer scope – the parameter to the add function.

And what about an identifier for this function? Well, it doesn’t have one. We’ve not bound it to an identifier anywhere. We call this an **anonymous function**. Why doesn’t it have an identifier? Well, it doesn’t really need one. As we said above, there’s no way to ‘reach inside’ the body of the function and fish it out, so we don’t need to bother binding it to an identifier when we define it.
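
In fact, because a function is applied to the value to its immediate right, we can apply an anonymous function directly, without ever binding it – a small illustrative aside:

```fsharp
// apply a lambda to an argument in one go
(fun y -> y + 1) 5
// val it : int = 6
```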

As we mentioned before, there’s nothing to stop us binding it to an identifier along the way, if it would make our code clearer. Here’s an example of that

`let add x =`

`    let inner = fun y ->`

`        x + y`

`    inner`

Notice how the function body now extends over several lines, so we’ve had to indent it with spaces to tell the F# compiler that it should treat this all as a single block. If you look back up at our box diagram for the scopes, you can see that it looks quite similar – we just haven’t drawn the boxes! We’re still creating a lambda, but binding it to an identifier called inner, and then returning the value of that identifier, rather than returning the lambda directly.

*Quick note: You might be tempted to use a tab to indent it neatly – that won’t work. F# requires that you use spaces. If you’re using an editor to write your F# code, make sure you have set it to “convert tabs to spaces” mode, and then you can hit the tab key to indent.*

That’s a lot more verbose, and, in such a simple example, not really any clearer, so we’ll prefer the definition in our original form.

`let add x = fun y -> x + y`

In earlier sections, we talked about the fact that the choices you make when you choose how to implement an algorithm can influence the cost of that algorithm quite significantly – be that in terms of program length, memory consumption or computational effort. But there’s another way in which your choice of implementation can affect the system, and that’s in how easy it is to understand.

We may spend a long time carefully crafting some code, but if we come back to that code later and can’t work out how it works by looking at it, then we may misinterpret what it does, or how to set it up, or what its constraints might be. This is a rich source of errors in our programs! So, unless there is some absolutely critical reason why we shouldn’t (and there almost never is), we prefer to use short, simple functions with a minimum of extraneous detail, that do exactly one thing, and without side-effects in the rest of the system.

OK, so we’ve created a function that returns a function. What earthly use is that? Well, one way we can use it is to create a whole family of functions.

Try the following

`let add1 = add 1`

**Spot test:** What do you think F# is going to respond? Try and work it out before you look below.

*Hint: We’re binding something to an identifier called add1, and the body of the binding is a call to the add function we just defined, where the value passed as the parameter called x is 1. Remember that the add function returns another function.*

**Answer:**

`val add1 : (int -> int)`

We know that we can read that as “the identifier add1 is bound to a function that takes a 32bit integer, and returns a 32bit integer”.

Let’s try another

`let add2 = add 2`

What’s F# going to respond?

`val add2 : (int -> int)`

Clearly, we could do this for any arbitrary integer we wanted.

And how do we use any of these functions we’re creating? Well, `add2` is an identifier bound to a function which takes an integer, and returns an integer, so there’s no magic to that.

**Spot test:** How would we use this function to calculate 2 + 3?

**Answer:**

`add2 3`

`val it : int = 5`

**Spot test:** What is 3657936 + 224890?

*Hint: Don’t use your calculator! Use our fabulous integer addition function factory!*

**Answer:**

`let add3657936 = add 3657936`

`add3657936 224890`

F# responds with

`val add3657936 : (int -> int)`

`val it : int = 3882826`

In fact, we can leave out that intermediate function binding entirely.

Try the following:

`add 3 4`

F# responds with

`val it : int = 7`

Excellent! But how did that work?

Well, remember that F# applies a function to the value immediately to its right, which it uses as the parameter to the function.

First, it applied the `add` function to the value to its right (the 32bit integer 3) and that, as you know, returns another function. So it took that resulting function and applied it to the value to *its* right (the 32bit integer 4), resulting in our answer: 7.

This is a very interesting result. The net effect is that we can add any two numbers together, even though any given function can only take one input, and produce one result, by using a function that returns a function.

This is such a useful pattern (and writing it all out by hand is a bit of a drag) that F# gives us a shorthand.

Instead of

`let add x = fun y -> x + y`

We can type

`let add x y = x + y`

F# responds

`val add : x:int -> y:int -> int`

Notice that this is exactly the same as our original definition – the value called add is a function that takes a 32bit integer and returns a function that takes a 32bit integer and returns an int.

And we can still type

`add 3 4`

to get

`val it : int = 7`

In this new shorthand syntax, we can just think of the function as taking several parameters – two in this case – but we know that what is really happening under the covers is that we are creating a function that takes the first parameter, and returns a function that takes the second parameter (and uses the first parameter, from the outer scope, in its body).

And if we need to, we can still capture that intermediate function, with its first parameter bound to a particular value.

`let add342 = add 342`

`val add342 : (int -> int)`

We call this binding of a subset of the parameters of a function **currying** and we’ll learn more about that in a later section.

**Exercise: Create a function that applies the logical XOR operator we worked out in the previous section.**

In the next section we’ll look at the answer to that exercise, and how we can make the function work more like the other logical F# operators we’ve already used.


One of the most common questions that has come from that post is “how do I achieve a section with a full-width bleed (e.g. for a full-width background), part way down a page?”

Something that looks roughly like this:

We’re going to deal with that now. Here’s the basic recipe:

That’s the secret sauce.

This is the extra slice of cheese.

This is the dangerous amount of jalapeno you sneak in under some shredded lettuce.

If you’re looking for a full-width background, for example, then you want to nest containers like this to provide a responsive pseudo-fixed-width container (which will appear inline with the rest of your responsive pseudo-fixed-width content), embedded within a full-width responsive background layer.

(Bootstrap3)

```
<div class="container-fluid">
  <div class="row">
    <div class="col-sm-12">
      <div class="container">
        <div class="row">
          <div class="col-sm-6">
            <p>Nunc congue, enim nec faucibus rutrum, orci magna bibendum odio, nec euismod lectus neque at felis. Vestibulum lectus arcu, aliquet vel vulputate sed, aliquet convallis massa. Aenean urna ante, pretium in pellentesque at, posuere vitae urna. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Donec in ipsum urna, id aliquet erat. Aliquam id nisi eu nunc pulvinar faucibus quis quis erat. Aliquam placerat auctor lectus, sit amet consectetur ipsum lacinia sit amet.</p>
            <p>Duis vulputate bibendum elementum. Phasellus eu sodales ligula. Vivamus at justo mauris, id pretium libero. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Maecenas hendrerit dolor sit amet urna lacinia porttitor. Integer sed gravida turpis. Etiam varius nulla nulla, volutpat porta risus. Nullam sollicitudin augue posuere nisl sollicitudin ultrices. Morbi dignissim mauris varius orci placerat luctus. Aliquam et risus nulla, ac tincidunt augue. Integer interdum convallis nibh. Maecenas porttitor, leo at sollicitudin posuere, neque sem porttitor odio, et convallis sem massa id dolor. Sed molestie justo id lacus luctus vel commodo sem vestibulum. Vestibulum venenatis risus quis dui consectetur tempus. Suspendisse ultricies turpis sed odio laoreet imperdiet vel ac erat. In mattis enim ut orci tincidunt condimentum.</p>
          </div>
          <div class="col-sm-6">
            <p>Phasellus non nibh ante, a elementum quam. Donec cursus fringilla dui et iaculis. Aenean rhoncus erat accumsan orci mollis vitae malesuada metus euismod. Maecenas aliquet leo et dui venenatis pulvinar sit amet id mi. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Sed non purus sed tellus vehicula ullamcorper. Curabitur in nisi faucibus enim rutrum placerat. Morbi consequat, magna at convallis tempus, eros ante cursus neque, quis fermentum quam est a est. Mauris non arcu nulla. Vestibulum facilisis pulvinar augue nec egestas. In iaculis diam in libero facilisis eu rutrum arcu elementum. Phasellus viverra porttitor interdum. Donec vel elit vel erat sollicitudin sollicitudin.</p>
            <p>Ut libero turpis, tristique et euismod quis, feugiat eu lorem. Nulla facilisi. Donec nec cursus nisi. Suspendisse tempor egestas rutrum. Pellentesque ac velit eget nisl auctor tincidunt in at risus. Integer lacinia neque vel lacus ornare vehicula suscipit dolor suscipit. Nunc quam diam, consequat pulvinar egestas quis, congue ac lacus. Donec pharetra posuere lacus, eget egestas est feugiat vel. Fusce orci dolor, lacinia aliquet mollis quis, posuere at diam. Curabitur non nibh massa, elementum luctus dolor. Curabitur elit odio, elementum sit amet placerat ac, ullamcorper sagittis purus. Integer odio dolor, scelerisque nec hendrerit non, laoreet ut augue. Curabitur lorem odio, lacinia ac placerat in, condimentum a velit.</p>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
```

(In Bootstrap 2, the equivalent nesting uses the `container-fluid`, `row-fluid` and `span*` classes.)

By adding multiple `container` elements, you can mix-and-match full width and pseudo-fixed-width sections, up and down the page.

Of course, in any container, you can also throw in some completely custom HTML – you don’t need to follow the grid system at all if that doesn’t suit.

In this section, we’re going to look inside these kinds of statements, and focus on the bit that is just about truth and falsity. As usual, the approach will be to learn how to refine a statement from regular language into a rigorous, precise and repeatable form that is useful to a computer, but captures the real-world intent. Ultimately these kinds of statements underpin all of the decision making processes in our programs.

The notion of truth and falsity has two important characteristics for digital computers.

First, it is *definite* – there is no room for doubt or ambiguity. A statement is either true, or it is false. It can’t be a bit true. Or sort-of-false, if you look at it in a certain way. All of those grey areas in real life go out of the window, and you are left with TRUE and FALSE.

Second, since it considers only two values – TRUE and FALSE – it lends itself to being represented by a transistor in its ON and OFF states. By convention, TRUE is represented by ON (a bit with a 1 in it), while FALSE is represented by OFF (a bit with a 0 in it). But we’re skipping ahead a little.

In regular English language, **propositions** are usually bound up with **conditionals** of some kind, and are frequently used in **combination** with one another.

“If **you’ve cleaned your bedroom** **and** **you’ve done the washing up**, then you can go out and play.”

“If **virtuacorp shares reach 1.30** **or** **they drop below 0.9**, then we’ll sell.”

The **conditional** part of both of these sentences is the *if – then* wrapper around the **proposition**.

Between the *if* and *then*, each of the sentences above actually contains two **propositions**, which I’ve highlighted in bold to distinguish them from each other.

In each case, the two propositions are combined by an **operator**. In the first case this operator is the word **and**; in the second case it is the word **or**.

Our understanding of ordinary language tells us what these operators do: the first (**and**) means that the whole proposition is true if and only if both propositions are true. The second (**or**) means that the whole proposition is true if either proposition is true.

You’ll notice that each operator has the effect of combining the two propositions on either side of it into a single result. (We call these **binary operators** because they have two **operands**. You may remember that we’ve seen the word operand before!) We’re very familiar with this kind of operator in everyday maths.

The add operator in regular maths is a binary operator; it takes the expressions on either side of it, and combines them to form a result.

The multiplication operator is another binary operator; it also takes the expressions on either side of it, and combines them to form a result.

**Spot test:** Can you name two other binary operators you’re very familiar with?

**Answer:** There are several you could pick – the most obvious are probably subtraction and division.

If you said negation – then yes, that is an operator, but it is not a binary operator. It operates on only one value: the value that is to be negated, so it is called a **unary operator**.

You’ll also notice the multiplication and addition operators appear *between* their operands, so we often call them **infix operators**, whereas the negation operator appears *before* its operand, so we call it a **prefix operator**.

So, we can recognize binary and unary operators in regular maths. What about our **logical** operators, **and** and **or**? Can we write those natural language expressions in a more formal way, too?

Let’s call the two propositions in the first statement x (“you have cleaned your bedroom”) and y (“you have done the washing up”), and the overall proposition z.

We can then write the statement “you’ve cleaned your bedroom and you’ve done the washing up” like this:

z = x ∧ y

We’ve just added to the list of weird symbols we will one day take for granted. We don’t bat an eyelid at + for ‘plus’ or × for ‘multiplied by’ (or, indeed, ‘p’ for “a curious plosive noise we make by violently forcing air through our tightly closed lips as we open our mouth”). This new one is ∧, and it means ‘and’.

This expression, then, just means that if x is true, and y is true, then z is true; otherwise z is false.

Similarly, if x is (“virtuacorp shares reach 1.30”) and y is (“virtuacorp shares drop below 0.9”), then we can write the statement “virtuacorp shares reach 1.30 or they drop below 0.9” as

z = x ∨ y

This means that if x is true, or y is true, then z is true; otherwise z is false. The symbol ∨ represents ‘or’.

As well as the binary operators **and** (∧) and **or** (∨), there is a logical unary operator called **not** (¬), which is part of the same family. It has the effect of making a true proposition false and a false proposition true.

If x is (“virtuacorp shares have reached 1.30”) then the statement (“virtuacorp shares have not reached 1.30”) can be represented by

¬x

Here are the two families of operators we’ve seen so far – the familiar ones from everyday maths, and their logical equivalents.

| Everyday maths | Logical equivalent |
| --- | --- |
| × (MULTIPLY) | ∧ (AND) |
| + (ADD) | ∨ (OR) |
| − (NEGATE) | ¬ (NOT) |

One way to express what they do is to draw up **truth tables** for the operators. Given the truth value of each of two propositions, a truth table shows the result of applying a given operator to them.

Here’s the truth table for the ∧ (AND) operator. We write the two operands for the operator in the first two columns, and the result in the third column.

| x | y | x ∧ y |
| --- | --- | --- |
| False | False | False |
| False | True | False |
| True | False | False |
| True | True | True |

**Spot test:** Can you write out the truth table for the ∨ operator? Remember that the result is true if either proposition is true. Give it a go before you look at the answer below.

**Answer:** Here’s the truth table for the ∨ (OR) operator.

| x | y | x ∨ y |
| --- | --- | --- |
| False | False | False |
| False | True | True |
| True | False | True |
| True | True | True |

**Spot test:** What about the truth table for the ¬ operator? Remember that it is a unary operator, so it only has one operand. Again, give it a go before you look at the answer below.

**Answer:** Here’s the truth table for the ¬ (NOT) operator.

| x | ¬x |
| --- | --- |
| False | True |
| True | False |

As we mentioned earlier, computers aren’t great with complex notions like truth and falsity.

However, they are good with the numbers 0 and 1.

If we use 1 to represent true, and 0 to represent false, we can write our propositions in a way that makes them easier for a computer to understand.

So, if, for example, x = 1 and y = 0, then it follows that x ∧ y = 0 and x ∨ y = 1.

We call this Boolean algebra, named for George Boole, who was a 19th Century British mathematician.

We can write out truth tables for these Boolean operators in exactly the same way as we could for our propositions earlier.

**Spot test:** For which operator is this the truth table?

| x | y | result |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

**Answer:** ∧ (AND)

It turns out that the set of values {0, 1} and the three operators ∧, ∨ and ¬ are all we need to construct any Boolean expression we care to think of.

What happens if we try to compose several binary operators into a more complex expression?

Let’s start out (as usual) by looking at this in the familiar world of everyday maths:

x + y + z

That just means add x and y, then add z, to get the result. But what about this?

x × y + z

Do we multiply x by y, then add z, or multiply x by the result of adding y and z?

We sort this out by applying a convention called **operator precedence**. By convention, negation takes precedence over multiplication and division, which take precedence over addition and subtraction. So, we would understand the previous expression to mean:

(x × y) + z

This shows us another good (often better!) way of sorting out what we mean when we write down an expression full of operators. We use parentheses (this just means “round brackets” – parentheses is the plural of parenthesis) to indicate which parts of the sum we should calculate as a unit. This can avoid a lot of confusion and helps the reader a lot. It is quite clear that (x × y) + z is not the same as x × (y + z).

Logical operators compose in exactly the same way. NOT takes precedence over AND, which takes precedence over OR.

So, for example, ¬x ∧ y ∨ z is understood to mean ((¬x) ∧ y) ∨ z.
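We’ll meet the F# interactive environment properly a little later in this section; once you have it to hand, a quick sketch like this (the names `implicit` and `explicit` are just illustrative choices) confirms how the logical operators group:

```fsharp
// && binds more tightly than ||, so with no parentheses this
// parses as true || (true && false), which evaluates to true.
let implicit = true || true && false

// Forcing the other grouping with parentheses gives false instead:
// (true || true) && false = true && false = false.
let explicit = (true || true) && false

printfn "%b %b" implicit explicit   // prints "true false"
```

The two results differ, which shows that the grouping really does matter.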

The three logical operators we’ve already seen are all you need to construct a Boolean algebra. However, some special combinations of operators are so useful that we give them names.

Let’s think about the light in a stairwell for a minute. It has two switches, one at the top, and one at the bottom. Both switches are off, and so the light is off. I want to go upstairs to bed, so I flip the switch at the bottom to “on”, and the light comes on. I trudge up the stairs, trying not to spill my bedtime cocoa (or gin and tonic or whatever), and blearily flip the switch at the top to “on”. The light goes off. In the cold, winter’s morning, I awake, bright eyed and ready for a day’s work, and leap out of bed to head downstairs for a cup of tea (or gin and tonic or whatever). Being a winter’s morning, it is still dark, so I flip the switch at the top of the stairs to “off”. The light comes on. I bound downstairs two at a time, flip the switch at the bottom to “off” and the light goes off.

We will not look any further into the grim details of my day, but draw out a table describing the states of the switches and the lights throughout that story.

Switch 1 | Switch 2 | Light |

Off | Off | Off |

On | Off | On |

On | On | Off |

Off | On | On |

(Off) | (Off) | (Off) |

This looks an awful lot like a Boolean truth table:

| x | y | z |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
| 0 | 1 | 1 |

If both operands are different, then the result is 1. If they are the same, the result is 0. We call this exclusive OR, or sometimes XOR, and it is denoted by the symbol ⊕.

However useful this may be (we’ve already seen a real, practical example of its use in the light switches) – it is not one of our primitive operators. Instead, you can construct it as a combination of the operators AND, OR and NOT.

Let’s try an exercise now.

**Exercise: Write a Boolean expression using AND, OR and NOT that is equivalent to XOR.**

To help you work out the answer, we can use the interactive programming environment that we installed in our setting up section to experiment with logic.

The setup instructions for this F# environment are in Part Two – Setting Up.

Start it up by opening a command prompt / console and typing `fsi` (Windows) or `fsharpi` (Linux/Mac).

F# understands Boolean values and operators. It calls the values `true` and `false`; the AND operator is `&&`, the OR operator is `||` (you’ll probably find this pipe character near the left shift key, or near the return key on the right of the keyboard, but your keyboard layout may vary), and the NOT operator is the word `not`.

Let’s give this a try. First, we could try ¬FALSE. Type

`not false;;`

and press return.

(Remember that `;;` at the end of the line tells the F# interpreter that we’re done with our input and it should execute what we’ve typed.)

F# responds with the following:

`val it : bool = true`

You can read that as: “the resulting value (‘it’) is a bool and that value is true”. Which is exactly what we’d expect from ¬FALSE.

Let’s try a binary operator – AND, say. What’s the result of TRUE ∧ FALSE? In F#, the AND operator is represented by `&&`, so we can type:

`true && false;;`

F# responds:

`val it : bool = false`

which we read as “the resulting value (‘it’) is a bool and that value is false”

**Spot test:** What about OR? What’s the result of TRUE ∨ FALSE? OR is represented by `||` in F#. So what will you type, and what will the F# runtime’s output be? As usual, work it out, try it in F#, then check the answer here.

**Answer:**

`true || false;;`

`val it : bool = true`

which we read as “the resulting value (‘it’) is a bool and that value is true”.
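One aside that can help with the exercise: F# has no dedicated boolean XOR operator, but for `bool` operands the inequality operator `<>` behaves exactly like one – it is true precisely when the two operands differ – so you can use it as a reference to check your own AND/OR/NOT construction:

```fsharp
// For bool operands, inequality (<>) is true exactly when the
// operands differ - the same behaviour as XOR.
printfn "%b" (true <> false)   // operands differ, so this prints "true"
printfn "%b" (true <> true)    // operands match, so this prints "false"
```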

Do try this out before looking at the answer. It might take you some time, but don’t worry – work it through carefully, and you’ll get there in the end.

You’ll be doing a lot of this in real, day-to-day programming jobs wherever you are in the stack, and you want to train your brain to think in this way.

One problem with most modern programming is that we have to do an awful lot of donkey work setting up the environment we’re working in, preparing data to match the requirements of some 3rd party code we have to integrate with, or dealing with the operating system, programming language and runtime (more on those later), laying out forms or rendering the visuals our User Experience and Design team have lovingly prepared for us. So much so that for many programmers, this seems to be all of the job.

Faced with the need to get some visual to animate in to a web page, they read a book, or search for a blog, and find some code that (pretty much) does the job, with nice step-by-step instructions on how to get it into their application. They bookmark the page, and that tool or code sample becomes part of their development armoury. When they see a problem like that one, they reach for that tool. They’re often productive and they get the job done. (Not inventing everything from scratch is a really important part of programming – we’re always building on other people’s skills and experience.)

However, they often don’t really understand why that code did the job for them, or what the constraints were, or under what circumstances it might fail.

And when faced with a knotty piece of business-driven logic like the examples we’ve seen (even something as simple as a pair of light switches – and real business logic is often much more complicated than this) they don’t have the discipline, experience or tools to analyse it to a sufficient level of detail even to get the basic logic right – let alone think about the edge cases. And that’s one of the primary sources of bugs in our systems.

We all get sucked into this way of working from time to time – pressure to deliver often leads us to take supposed short-cuts, and hack our way through the problem to some kind of working solution. It is very tempting to take a working example from the web and hammer it in to our application, without taking the time to go back (at some point) and really understand what it does. But that approach usually comes back to bite us later on.

This whole course is about diving into the craft of programming and starting the long journey to really understand (at some level) what we do when we write programs. That’s why I’m encouraging you to do the exercises, and not just read the question, and then the answer, and move on.

Also, the people who really understand this stuff, and come in and analyse the mess other people have made of their logic, or advise people how not to get in a mess in the first place, get paid a heck of a lot more than the people doing the day-to-day grind, and (probably) have much more fun to boot. So there are incentives for this investment of your time and brain-power!

OK, you can get back to working on the exercise, now that you’re armed with something that lets you quickly test your efforts.

**Exercise: Write a Boolean expression using AND, OR and NOT that is equivalent to XOR.**

**Answer:**

Probably the easiest way to approach this problem is through the truth table.

Let’s remind ourselves of the truth table for XOR.

| x | y | x ⊕ y |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 0 |

Now, if we discard the rows that produce a false result:

| x | y | x ⊕ y |
| --- | --- | --- |
| 1 | 0 | 1 |
| 0 | 1 | 1 |

Looking at the table above, we build a term for each row by ANDing together the operands. If there is a 1 in an operand’s column, we just take the operand itself. Otherwise we take its complement (its NOT).

So, in this case our two terms are:

| x | y | term |
| --- | --- | --- |
| 1 | 0 | x ∧ ¬y |
| 0 | 1 | ¬x ∧ y |

We now OR those terms together to produce the result:

x ⊕ y = (x ∧ ¬y) ∨ (¬x ∧ y)

This is sometimes called a **sum of products** approach (thinking about the relationship of OR with addition, and AND with multiplication).
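We can check the sum-of-products form against the truth table in the F# interactive environment. Here’s a sketch (the name `xorSop` is just an illustrative choice) that evaluates the expression for all four rows:

```fsharp
// Sum of products: OR together one AND-term per row that is true.
// x XOR y = (x AND NOT y) OR (NOT x AND y)
let xorSop x y = (x && not y) || (not x && y)

// Walk all four combinations of operands and print each result.
for x in [false; true] do
    for y in [false; true] do
        printfn "%b %b -> %b" x y (xorSop x y)
```

You should see `true` only on the two rows where the operands differ.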

Another way of doing it would be the **product of sums** approach. In this case, we look only at the rows of the truth table that produce a false result.

| x | y | x ⊕ y |
| --- | --- | --- |
| 1 | 1 | 0 |
| 0 | 0 | 0 |

As the name implies, this time we build a term for each row by ORing together the operands – but now, if there is a 0 in an operand’s column we take the operand itself, and if there is a 1 we take its complement.

| x | y | term |
| --- | --- | --- |
| 1 | 1 | ¬x ∨ ¬y |
| 0 | 0 | x ∨ y |

Then we AND together those terms to produce the result:

x ⊕ y = (¬x ∨ ¬y) ∧ (x ∨ y)
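The product-of-sums form can be checked in F# in the same way (again, the name `xorPos` is just an illustrative choice):

```fsharp
// Product of sums: AND together one OR-term per row that is false.
// x XOR y = (NOT x OR NOT y) AND (x OR y)
let xorPos x y = (not x || not y) && (x || y)

// Walk all four combinations of operands and print each result.
for x in [false; true] do
    for y in [false; true] do
        printfn "%b %b -> %b" x y (xorPos x y)
```

The output should match the sum-of-products version row for row – two different expressions, one truth table.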

If you didn’t know about the truth-table technique, we could also look at the expression in words: “it is true if (x or y) is true, but not if (x and y) is true”.

This leads us to yet another expression:

x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)

So many possible expressions for the same result! In a later section, we’re going to look at the rules of Boolean algebra that let us transform from one to another, and find a suitably compact form.

Let’s pick that last one and prove to ourselves that it works by trying a simple example, x = TRUE and y = FALSE, in F#.

This can be expressed as:

`(true || false) && not (true && false);;`

That produces the result:

`val it : bool = true`

So far, so good, but we really need to produce the whole truth table. It will be a bit boring typing that whole expression out every time, so in the next section, we’ll learn how to use F# to ease the pain for us.

Learning To Program – A Beginners Guide – Part Two – Setting Up

Learning To Program – A Beginners Guide – Part Three – What is a computer?

Learning To Program – A Beginners Guide – Part Four – A simple model of a computer

Learning To Program – A Beginners Guide – Part Five – Running a program

Learning To Program – A Beginners Guide – Part Six – A First Look at Algorithms

Learning To Program – A Beginners Guide – Part Seven – Representing Numbers

Learning To Program – A Beginners Guide – Part Eight – Working With Logic

Learning To Program – A Beginners Guide – Part Nine – Introducing Functions

Learning To Program – A Beginners Guide – Part Ten – Getting Started With Operators in F#

Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions

Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks – Recursion in F#