How I learned that marketing a website (even if it is really useful) is surprisingly hard

It’s now about two weeks since HMRC’s initiative to get the UK to go out for a meal finished. In an earlier post I showed how to make a map-based tool to search all Eat Out to Help Out (EOTHO) establishments in very little time.

What follows is the story of what happened after the site launched, and how I failed to make a dent at marketing.

To recap:

  • HMRC published a data-set of restaurants participating in EOTHO on GitHub
  • I fed that data into a set of shell and AWK scripts to generate a searchable map
  • The static site is hosted on GitHub

I was quite pleased with the result and wanted to shout about it.

Here’s what I did:

Slack (Success)

I sent out messages on the Equal Experts, HMRC Digital and X-Gov Slack instances and got some really nice feedback.

Blog about it (Success-ish)

We turned my original blog post into an Equal Experts blog post, which I appreciated.

LinkedIn posts (Success-ish)

I took to LinkedIn and added a few posts. My most viewed post just pointed back to my blog; it had nearly 4,000 views and 60 reactions. Not bad considering I usually get hundreds of views rather than thousands, but I’m not exactly influencing anybody. Not that I particularly want to either…

Open Source (Success)

I decided very early on to make the website open source, mainly thinking it would be a good way of promoting the site, but I was very pleasantly surprised when, not long after launch, Issues and Pull Requests started appearing. I started collaborating with Scott Dutton (whom I’d never met before) – and he was a great help, bringing improvements and ideas.

Learn something iOS (Success)

After I’d described my approach, I got together with Paul Stringer, a colleague from Equal Experts – again, someone I’d never met IRL, such are these pandemic times – and we got chatting about how this could be turned into an app. So we did (well, Paul did most of the app – I just contributed a couple of server-side bits). I found it quite exciting to have an app on the App Store. For more info on the app, read Paul’s blog post.

Learn something Android (Fail-ish)

After the success of getting an iPhone app out of the door (about two weeks into EOTHO), I was talking with my colleague Chris Sawczuk about doing an Android app. And while it lagged a bit behind (and getting the Google accounts sorted probably took me too much time), we had one ready to go – it just never made it out of the door, as we never quite got the internal test version released before Eat Out to Help Out actually finished.

Interlude on numbers

What kind of traffic was I getting?

The initial numbers looked great – I was getting thousands of users, with no more than word of mouth. However, I quickly discovered that most of my traffic wasn’t direct traffic, but came from two sites that had embedded my map in an iframe (which I was quite happy about – first because I had released the code as open source, and secondly, and especially, because one of them ran a bit of background based on my initial blog post – great!)

So I was quite encouraged and thought – let’s put a bit of effort into marketing and see where it goes.

Submit to HackerNews (Fail)

I submitted my website to Hacker News, which has a “Show HN” section where submitters can show “stuff”. I never got a single upvote – probably because I had only just created the account and had no “karma”.

Tweet about it (Fail)

I sent out tweets to Martin Lewis, tech journalists, my local MP and influencers who tweeted about Eat Out to Help Out, and replied to announcements about the scheme. All in all, it was quite the failure – probably because I created my Twitter account 10 years ago and never really used it. I managed to double my followers – I now have a grand total of 9 – and while Twitter is good for finding some really useful nuggets, there’s just so much noise to wade through.

Facebook (Fail)

My Facebook profile is about as unused as my Twitter one. So it wasn’t all that surprising that sharing some news about my new website came across as a bit funny. My wife was more successful in spreading the news about my website.

Reddit (Fail)

Reddit was a funny one. I suffered from just the same problem as on Hacker News: I created the account, shared something, and nobody wanted to know. It was even funnier when my post was deleted as a duplicate! I queried it with a moderator and the reply was as swift as it was crushing:

“Generally (with a certain level of exception), we don’t advertise apps/sites/projects/etc. The flairing was incorrect though. That was my fault. It should have been marked as ‘spam’.”

Fair enough. I am coming to the realisation that, unless you get really lucky, going viral on “the Interweb” can only be done if you have lots of followers/karma/points, or know someone who does.

Notifying the press (Epic Fail)

My last thought was to send emails to national and local newspapers. My thinking was that local papers might be interested in a story along the lines of “Local man writes software to find Eat Out to Help Out restaurants”, and that for the national newspapers and the technology or restaurant trade publications it would have been interesting to show an alternative “Search Eat Out to Help Out” website which, complementing the HMRC tool, IMHO offered some advantages.

But nobody replied.

Well, in what I regard as somewhat of an achievement, one editor did reply with:

“On second thoughts, I won’t waste your time. I don’t think it is something we’d cover. Thanks for the offer and best of luck with it.”

This little project taught me a lot of technical and non-technical lessons and was a lot of fun – and who knows what can be done with the underlying tech in future.

My main takeaway is: Marketing is hard! It’s not about what you know, it’s how many followers you have 😉

Last week, Associate Gerald Benischke posted about his Eat Out To Help Out discount dining finder – a map tool he created to help the public easily find local restaurants participating in the scheme. 

I loved how each of the problems he encountered were solved using simple scripts and old fashioned unix utilities that reduced solutions to the minimum needed.

The creation of this tool led to internal discussions between myself and Gerald about how to turn this good idea into an App, and how best to go about promoting it. I suggested he could try some different routes to get an App done, following the usual quick wins:

  1. Wrap it all in a webview and be done.
  2. Re-do it with native components, but use one of the web-based development frameworks like React Native or maybe Cordova/PhoneGap.
  3. Publish some APIs until a willing participant comes along to take on the build of a native App.

That willing participant turned out to be me!  What I really wanted Gerald to do was build this as a fully native iOS App using native maps and UI.  I thought this would give it the best app experience and make it a worthwhile addition to the App Store, not just a tool to help market a website (personally, I feel cheated when that happens, because I think we all go to the App Store for Apps – if we wanted a website, well, that’s what browsers are for).

So that’s what we did, and what follows (in the house style of Gerald’s first post) is how we did it. I think it makes for an interesting contrast between the concerns that arise in native mobile development and in web development.


We started with a list of participating restaurants and their latitudes and longitudes, so that we could place them on a native map. Initially, I expected to re-implement the partition/lookup approach that had been used for the site. I could have lifted the JavaScript code for this and then used iOS’s native JavaScriptCore engine to run it within the App (without a browser).  But Gerald had already found a way to make my life much easier. After checking out Getting Started with MapKit, and spotting the native support for GeoJSON formatted files, he found another tool, csv2geojson.  A quick NPM install later, he had a tidy 1MB file in GeoJSON format containing all 55,000 restaurants.

npm install -g csv2geojson

cat <(echo name,postcode,lat,lon) <(awk -F, -f reduce_precision.awk target/named_pubs.csv) | csv2geojson --lat lat --lon lon | jq -c . | gzip -c > target/test.json

This meant devices could download all the data they would need very quickly, in a format ready to parse and annotate a map with.  There’s a postcode lookup on the site which forms the central part of the UI. In the spirit of “You Aren’t Going To Need It” (YAGNI), and deferring everything until we really needed it, I decided to leave this for now. Instead, I focused on just getting the annotations on the map, and checking how well (or badly) this would perform with 55,000 items.


The site uses LeafletJS for maps.  On iOS, we decided simply to opt for native Apple Maps via MapKit, which comes out of the box, behaves consistently with other Apps, and gives the best user experience, performance and developer APIs (there’s also a little-known web version, MapKitJS, which is on par with the native version).

We then discovered that getting a map on screen is easy, but setting the map position up correctly takes a little learning around the APIs – finding the right coordinates to centre the map on the UK, for example (it turns out the middle of the UK is somewhere in Morecambe Bay).  Figuring this out took some trial and error, not being familiar with the maths behind it all.

extension CLLocationCoordinate2D {

    // Somewhere in Morecambe Bay
    static let UK = CLLocationCoordinate2D(latitude: 54.093409, longitude: -2.89479)
}

extension MKCoordinateSpan {

    static let HIGH = MKCoordinateSpan(latitudeDelta: 14.83, longitudeDelta: 12.22)
    static let MIDDLE = MKCoordinateSpan(latitudeDelta: 0.025, longitudeDelta: 0.025)
    static let LOW = MKCoordinateSpan(latitudeDelta: 0.005, longitudeDelta: 0.005)
}

extension MKCoordinateRegion {

    static let UK = MKCoordinateRegion(center: CLLocationCoordinate2D.UK, span: MKCoordinateSpan.HIGH)
}

extension CLLocationDistance {

    static let UKZoomMin = CLLocationDistance(exactly: 0.5 * 1000)!
    static let UKZoomMax = CLLocationDistance(exactly: 2200 * 1000)!
}


We also locked the map to the UK, as the scheme only applies there – meaning it’s not possible to pan away, or, when using location, to show somewhere else if you happen to be outside the UK.  This caused us a little fun later, at App Store review time.

static func constrainMapBoundariesToUnitedKingdom(_ map: MKMapView) {

   map.cameraBoundary = MKMapView.CameraBoundary(coordinateRegion: MKCoordinateRegion.UK)

   map.cameraZoomRange = MKMapView.CameraZoomRange(minCenterCoordinateDistance: CLLocationDistance.UKZoomMin, maxCenterCoordinateDistance: CLLocationDistance.UKZoomMax)
}



Now we had to ensure that the map had something on it.  After some run-of-the-mill implementation of file downloading with NSURLSession – correctly observing server cache policies to avoid unnecessary bandwidth usage on the GitHub site – we had the GeoJSON file (transported gzipped to further save bandwidth). We parsed it into an in-memory store of native Swift objects, giving us our 55,000 objects and locations. To see how things performed, I threw the entire lot onto the native map and watched what happened:


It looked promising, and MapKit does a good job of clustering nearby annotations.  But performance was very poor, resulting in jagged, unresponsive panning and zooming – because clusters were being dynamically recalculated from the 55,000 locations MapKit was being asked to track in real time.

There are probably all kinds of ways to get smarter about which annotations to plot on the map, resulting in more intelligent clustering.  However, lack of time was a factor, so finding some quick ways to cheat was the “Plat du Jour”.  I decided to exploit the fact that it’s not actually helpful to see every single restaurant on a map when you’re really after those nearby.

So the quick solution was to begin by limiting annotations so that the user gets to street-level detail more quickly.  Using a quick check of whether a location fitted within the current map view, we were able to limit the number of annotations on the map at any one time – although not enough, as the annotations on the periphery seemed less useful. I think Gerald’s approach of using a 5-mile radius from the centre point could be used here instead.

Pins and map


This limited the number of annotations on the map at any one time, which really helped – however, it will still begin to grind at the limits when dealing with the density of locations in places like central London.

The implementation so far still involved a brute-force enumeration of 55,000 records after every pan and zoom of the map, which you can see in the code below. Sounds SLOW – but these are finely tuned super-computers, so it turned out our concerns were unwarranted.  Computational performance was no problem in this regard, even on a stately iPhone SE.  (Rendering demands also scale down naturally with the smaller screens of smaller devices, so rendering performance concerns were unwarranted too.)

Filter 55k records…on a PHONE!  Ha-ha… Oh wait a minute! 😳

let annotations = items.filter { restaurant in

   return mapViewRect.contains(MKMapPoint(restaurant.coordinate))
}


Optimising this part of the code with a partitioned lookup approach would have been premature, and would have introduced a possibly larger performance bottleneck: downloading a large number of much smaller files, which would have been very costly in network and battery terms.

On mobile, it’s known to be more power-efficient to download a few bigger files than to perform many network requests for smaller ones. Having all the data on-device and in-memory meant calculations could be very fast without incurring network lookups.


We now had a map where performance was OK.  Whilst there was some room for improvement, “time to App Store” rather than ideal implementation was the critical priority here, so we left the map as it was.

Earlier, we had deferred the postcode search, which gives users a quick way to find locations nearby.  With the map in place, it was time to think again about whether it was needed.

Postcode Not Found

It turns out that on a phone with a multitouch interface, it’s faster and more natural, IMO, to pinch and zoom your way to a place than it is to peck out a postcode on the keyboard.

On the web, you don’t have the luxuries of pinch and zoom, but you do have a big keyboard and mouse at your fingertips, and a postcode look up makes sense.  Just one of the considerations where it makes sense for mobile UX and web UX to tailor themselves to the user inputs to hand. 

For an App, I believe multitouch is by far the most efficient and preferred user input.  On the web, the default is to design for keyboard and mouse for the widest cross-platform support.

Even faster than pinching and zooming is using your actual location.  Perhaps not as natural to enable in a browser, but we tend to consider Apps to have better security and privacy models (whether that’s true or not).  So our assumption here was that using location is more than a “nice to have” – it’s something users would expect.

So next up was connecting to the user’s current location to determine where to centre the map.  Fortunately, this was all quite easy to implement.  The iOS map already has built-in support for showing the user’s location and heading; it took another few lines of code to enable this and centre the map on their location.

func addSystemMapUserTrackingButton() {

   let userTrackingButton = MKUserTrackingButton(mapView: self.mapView)
   view.addSubview(userTrackingButton) // add the button to the view hierarchy (layout omitted)

   mapView.showsUserLocation = true
}

func zoomToUserLocation() {

   if let userLocationCoords = mapView.userLocation.location?.coordinate {

      let userMapPoint = MKMapPoint(userLocationCoords)

      let userLocationWithinBounds = mapView.cameraBoundary?.mapRect.contains(userMapPoint) ?? false

      if userLocationWithinBounds {

         let region = MKCoordinateRegion(center: userLocationCoords, span: MKCoordinateSpan.MIDDLE)

         mapView.setRegion(region, animated: true)
      }
   }
}




The next trick, though, was to manage the permissions to that location and keep the UI in sync accordingly.  Unless permission is explicitly granted to the App – and handled by the App – you don’t see the user location out of the box.

The case of the many cases

This took far longer to implement correctly than I expected.  It’s where my decision to forego the guidance of TDD started to burn me (in my defence, I followed patterns which I knew could easily be tested at a later stage). UI and state are natural bedfellows in Apps, and an App’s state changes for many different reasons.

You can be online / offline / foreground / background / force quit and restarted. A user can change location permissions behind your back when your App isn’t in the foreground and aware.  Location permissions may be granted only once, forever, or never. 

I never seem to learn this lesson: wherever you have any number of states and UI to synchronise, implementing a state machine would save a lot of time and trouble.

It didn’t work out too badly, but the code below is just the tip of a number of case statements needed to get everything working, and I really don’t like touching it now because it feels too fragile. Hence adding unit tests is next on the list.

switch status {

   case .on:
      // ...
      break

   case .off:
      // ...
      break

   case .undefined:
      // ...
      break

   case .initialising:
      // ...
      break
}



Palate Cleanser

Gerald created a nice little palate cleanser in his attempt at App development. He fired up Xcode to track down an obscure crash only his iPhone was able to produce.  This led to perhaps a bigger palate cleanser than he was hoping for – not only for him, but for his entire computer, which eventually needed an OS X update to Catalina. A note to those who’ve not tasted App development before: it comes with a dizzying and opinionated list of dependencies, extending all the way to particular brands of computer, phone, IDE and OS version.


We finally reached a point where we could launch our App to the world.  So far, development had used CI/CD on Buddybuild, which automatically code-signs and uploads builds to App Store Connect; builds were then published to the team using Apple’s own TestFlight.  This let us test the App before it hit the store.

The next step to shipping was submitting it to the App Store for review.  It’s always a little nail-biting: was there something we’d not thought of that would lead our efforts to be wasted?

The App was submitted late on a Sunday evening.  Lo and behold, by Monday morning there was a response ominously titled Guideline 2.1 Information Needed.  Taking a gulp, I opened up the message and learned that all the App Review team needed was a video of the App working on a device.

I was puzzled at first, but then it occurred to me that the App is limited to use in the UK, which means the App Review team, somewhere in the U.S., couldn’t see location working correctly – with the map locked to the UK, it could have looked like a bug.  A quick video was made and uploaded to the “Resolution Center”.


Finally, after a tiny delay between courses, the App was approved and on Tuesday, 11th August 2020, released to the App Store. If you download it, you can find and enjoy a nice meal out at a local eatery.

A massive thanks to Gerald Benischke​ for coming up with the idea, sharing it, and doing most of the hard work already.  The App code is all open source and over on GitHub – your contributions are welcomed.



This post describes how I developed the Discount Dining Finder, a lookup map tool for the Eat Out to Help Out scheme, in my spare time. The aim of this post is to provide an insight into how problems of scaling services can be solved by having no servers – and without using “serverless” services either.


A really nice side effect of working in a high-functioning environment is that sometimes you’re involved in bouncing ideas off each other. The delivery teams at HMRC were working on releasing yet another service to the public in less time than it takes to say “Agile”. This scheme was called Eat Out to Help Out.

The scheme would consist of different journeys:

  • registering a restaurant,
  • searching for registered establishments for the public and
  • making claims for payment.

Out of these three, the biggest unknown in terms of expected volume was the “search journey”, to be used by the general public. In this journey, a user would enter a postcode, and registered establishments within an X-mile radius would be displayed. There were a large number of unknowns about how much traffic to expect on the HMRC service.

  • Would there be big peaks at lunchtime or dinnertime?
  • What if Martin Lewis goes on TV, recommends visiting the site and then, two minutes later, 10% of the country wants to find out about their local eateries?
  • Could it impact other HMRC services (the tax platform hosts a multitude of services)?

Now, the tax platform is a very scalable and robust platform, and I am not for one minute suggesting that there was going to be a problem using microservices and geo-location in Mongo at scale. But one of the ideas I floated centred on the fact that the information is fairly static. Sure enough, “eat out” businesses register their premises with HMRC, but once they are registered, the bulk of the information will not change. Postcodes and the distances between them are not that changeable either. So that’s when I wondered whether this could be delivered as a static site.


I went away and found that freemaptools provides a list of UK postcodes and their associated latitude/longitude. In that file, there are 1,767,875 postcodes. Searching almost 2 million records sounds like a job for a server and a database, doesn’t it? Erm, no.

Looking at the postcode file

$ head -10 ukpostcodes.csv 
1,AB10 1XG,57.144165160000000,-2.114847768000000
2,AB10 6RN,57.137879760000000,-2.121486688000000
3,AB10 7JB,57.124273770000000,-2.127189644000000
4,AB11 5QN,57.142701090000000,-2.093295000000000

Instead of searching a single ukpostcodes.csv (95 MB) every time, I decided to “shard” or “partition” my CSV file into smaller files:


Each file is split into directories by the first letters of its outcode. So if I want to find out about postcode AB12 4TS, I split the outcode (AB12) into the path /A/B/AB12.csv. That file has only 799 entries – searching it is much more palatable.
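As an illustration (a hypothetical helper, not code lifted from the site), the path construction described above can be sketched in a few lines of JavaScript:

```javascript
// Hypothetical helper illustrating the sharding scheme: take the outcode
// (the part before the space), then use its first two characters as
// directory names.
function outcodePath(postcode) {
    const outcode = postcode.trim().toUpperCase().split(/\s+/)[0];
    return "outcode/" + outcode[0] + "/" + outcode[1] + "/" + outcode + ".csv";
}

// outcodePath("AB12 4TS") → "outcode/A/B/AB12.csv"
```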

So I’ve got my main page, where the user enters their postcode:

Discounted Dining Finder

And I can search for the postcodes simply by using a bit of Javascript inside the user’s browser.

d3.csv("outcode/" + outcode[0] + "/" + outcode[1] + "/" + outcode + ".csv")
    .then(function(postcodes) {
        result = postcodes.find(d => normalisePostcode(d.postcode) === postcode);
        if (result) {
            mapid.panTo(new L.LatLng(, result.lon));
        } else {
  "#status").text("Postcode not found");
        }
    });

D3 is a great library for visualisations, but I also found it very useful for reading and processing CSVs in JavaScript – and the files can be served up by a static web server.

Great! But how do I get my directory structure? I did not fancy manually copying and pasting files. You might think that surely now is the time to unleash some NoSQL database, or at least some Python. But no – I decided to keep it simple and use a combination of shell scripts and AWK:

awk -F, -f split_outcodes.awk target/ukpostcodes.csv

The split_outcodes.awk script did the hard work of creating new files in the correct directory.

$1 != "id" && $3 < 99.9 {
  split($2, f, " ");
  outcode = f[1]
  outcode1 = substr(outcode, 1, 1)
  outcode2 = substr(outcode, 2, 1)
  file = "target/outcode/" outcode1 "/" outcode2 "/" outcode ".csv";
  if (prev != file) close(prev);
  prev = file;
  if (headers[file] != "done") {
    print "id,postcode,lat,lon" >> file;
    headers[file] = "done"
  }
  print $0 >> file;
}

This resulted in 2,980 files. The biggest of those was 145KB, corresponding to 2,701 postcodes. That’s much better than scanning 1.7 million postcodes for every search!


I haven’t yet mentioned that the Discounted Dining Finder has a map, so here’s a quick overview of setting that up.

I used LeafletJS – an open-source map library. Here’s how:

mapid ='mapid');
L.tileLayer('https://{s}{z}/{x}/{y}.png', {
    maxZoom: 19,
    attribution: '&copy; <a href="">OpenStreetMap</a> contributors'
markerLayer = L.layerGroup().addTo(mapid);

And I had a map!



That map didn’t have anything on it yet! I was, though, able to convert a postcode into a lat/lon. The next step was to look up the restaurants. I decided to keep running with the idea of doing all the computation in the user’s browser (desktop or phone).

First of all, I found that UK postcodes cover an area of:

$ cut -f3 -d, ukpostcodes.csv | awk -F, 'BEGIN { max = -999; min = +999; } /[0-9.-]+/ { if ($1 > max) max = $1; if ($1 < min) min = $1; } END { print min, max; }'
49.181941000000000 60.800793046799900
$ cut -f4 -d, ukpostcodes.csv | awk -F, 'BEGIN { max = -999; min = +999; } /[0-9.-]+/ { if ($1 > max) max = $1; if ($1 < min) min = $1; } END { print min, max; }'
-8.163139000000000 1.760443184261870

I calculated that the rectangle (60.80 N/-8.16 W) – (49.18 N/1.76 E) covered about 400 miles from west to east and 800 miles from north to south. My aim was to provide a lookup that could find all restaurants in a 5-mile radius, so I split my search area up into tiles of roughly 5×5 miles. Here’s my translation function:

var x = parseInt(( - 49.0) / (12.0 / 160.0))
var y = parseInt((+result.lon + 9) / (11.0 / 80.0))
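Spelling out why those magic numbers give roughly 5-mile tiles (my own back-of-envelope arithmetic, using the approximate 800×400-mile extent above):

```javascript
// The UK bounding box spans ~12 degrees of latitude (~800 miles) split
// into 160 rows, and ~11 degrees of longitude (~400 miles) split into
// 80 columns, so each tile works out at roughly 5x5 miles.
const tileHeightMiles = (12.0 / 160.0) * (800.0 / 12.0); // 800/160 = 5
const tileWidthMiles  = (11.0 / 80.0)  * (400.0 / 11.0); // 400/80 = 5
```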

That would give me a coordinate set for a tile. So Buckingham Palace (51.5 N/-0.14 W) would be at coordinates (33/64). Based on that, I could build another set of files:


Whereby all the eateries that are in coordinates (33/64) would be in the file pubgrid/33/33-64.csv. That file would look like this:

blue racer and frilled lizard,BR1 1AB,51.406270892812800,0.015176762143898
saltwater crocodile and blue racer,BR1 1LU,51.401706890000000,0.017463449000000
king cobra and Schneider python,BR1 1PQ,51.406421920000000,0.012595296000000

The JavaScript can then find the suitable restaurants like so:

d3.csv("pubgrid/" + x + "/" + x + "-" + y + ".csv")
    .then(function(pubs) {
        let inRange = pubs
            .map(a => ({ ...a, distance: distance(result, a)}))
            .filter(a => a.distance < (5 * 1609.34))
            .sort((a, b) => a.distance - b.distance)
            .slice(0, 250);

        markerLayer.clearLayers();"#results").selectAll("tr")
            .data(d => [, d.postcode, (d.distance / 1609.34).toFixed(2) + " miles away" ])
            .text(d => d);

        inRange.forEach(d => L.marker([, d.lon], { "title": }).addTo(markerLayer));
    });

The above code does a few things:

  1. It calculates the distance between the selected lat/lon and the lat/lon for the restaurant.
  2. It filters out anything that is further away than 5 miles.
  3. It sorts by distance, so that the closest are first.
  4. It takes up to 250 results.
  5. It dynamically creates a table showing the results (this is very neat using D3).
  6. It clears and recreates all the markers on the map.

The end result looks a little like this:

Map with markers


Now, the next tricky bit is ensuring that my coordinate grid system contains, for each tile, all the relevant information about the closest eating establishments. Each tile is designed to be about 5×5 miles. To ensure that we find every restaurant within 5 miles of any point in a tile, each restaurant goes into the tile it is in, as well as the eight surrounding tiles. This is done using trusty AWK:

function print_to_file(file) {
  if (headers[file] != "done") {
    print "name,postcode,lat,lon" >> file;
    headers[file] = "done"
  }
  print $0 >> file;
}

{
  x = int(($3 - 49.0) / (12.0 / 160.0))
  y = int(($4 + 9) / (11.0 / 80.0))

  file_tl="target/pubgrid/" (x-1) "/" (x-1) "-" (y-1) ".csv";
  file_tm="target/pubgrid/" x "/" x "-" (y-1) ".csv";
  file_tr="target/pubgrid/" (x+1) "/" (x+1) "-" (y-1) ".csv";
  file_ml="target/pubgrid/" (x-1) "/" (x-1) "-" y ".csv";
  file_mm="target/pubgrid/" x "/" x "-" y ".csv";
  file_mr="target/pubgrid/" (x+1) "/" (x+1) "-" y ".csv";
  file_bl="target/pubgrid/" (x-1) "/" (x-1) "-" (y+1) ".csv";
  file_bm="target/pubgrid/" x "/" x "-" (y+1) ".csv";
  file_br="target/pubgrid/" (x+1) "/" (x+1) "-" (y+1) ".csv";

  print_to_file(file_tl); print_to_file(file_tm); print_to_file(file_tr);
  print_to_file(file_ml); print_to_file(file_mm); print_to_file(file_mr);
  print_to_file(file_bl); print_to_file(file_bm); print_to_file(file_br);
}

But wait a minute – that presupposes that I have a list of pubs and their coordinates. That’s not the case: all we’ve got is each establishment’s name and postcode. Thankfully, there’s a shell command, join, that I can use to combine my existing postcode file with a file of establishments and their postcodes:

join -t , -1 2 -2 2 -o 1.1,0,2.3,2.4 \
   <(sort -k 2 -t , target/pub_postcodes.csv) \
   <(sort -k 2 -t , target/ukpostcodes.csv) > target/named_pubs.csv

The above does the following:

  • sorts pub_postcodes.csv (containing name and postcode),
  • sorts ukpostcodes.csv (containing the postcode and lat/lon), and
  • joins the two files, creating one file in which the lines are matched up by postcode.

Palate Cleanser

You will have noticed that my examples aren’t using real pub or restaurant names. At the time of writing, HMRC had not yet published the list of registered restaurants, so I used my shell scripting knowledge (and a lot of googling) to create a fairly neat way of generating random pub/restaurant names.

I took a list of animal names and randomly combined pairs of them with “and”, the aim being to get “Fox and Badger” and endless variations.

Here’s the shell script to allow you to do this:

shuf -n 100000 target/ukpostcodes.csv | cut -f2 -d, > target/pub_postcodes.txt

shuf -rn 100000 animal_names.txt > target/1.txt
shuf -rn 100000 animal_names.txt > target/2.txt
yes "and" 2>/dev/null | head -100000 > target/and.txt
paste -d " " target/1.txt target/and.txt target/2.txt > target/pubnames.txt

paste -d "," target/pubnames.txt target/pub_postcodes.txt > target/pub_postcodes.csv

This accomplishes the following:

  • picks 100,000 random postcodes,
  • creates 100,000 random animal names,
  • creates another 100,000 random animal names (in a different order)
  • creates 100,000 instances of “and”,
  • and combines them all, resulting in my randomly generated pub names.

$ head pub_postcodes.csv
leguaan and bushmaster,B79 7SP
anaconda and Moluccan boobook,CM20 2GN
flying lizard and hoop snake,NW4 3LY
Towhee and agamid,LL11 6NN
Puffleg and Gila monster,OX12 0FE
mamba and Chipmunk,UB6 7AH
Eagle and Marsh harrier,FK1 5LE
Jay and chameleon,KA19 7NW
B and Maya,L5 7UB
ringhals and Diving bird,W9 2EH


All of the above is very good, but I’ve still not hosted my tool anywhere, and I don’t want to use my own servers. Thankfully, GitHub provides GitHub Pages and GitHub Actions, which can be combined to provide a build pipeline and a hosting solution!
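To give a flavour of what that can look like (this is a hedged sketch, not the project’s actual workflow – the script name `` and the `target/` output directory are assumptions, and the deploy step uses the third-party peaceiris/actions-gh-pages action):

```yaml
name: build-and-deploy
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # run the shell/AWK pipeline that generates the static site into target/
      - run: ./
      # publish the generated files to the gh-pages branch
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./target
```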


Thanks for reading – I hope you found the Discounted Dining Finder and the above tale interesting. The source code is available on GitHub and released under the Apache-2.0 open source licence.