georgepowell.info

George’s dumping ground/blog/website etc.

Ridiculously succinct array manipulation in javascript

Introducing z.js – a lightweight javascript library that allows you to use a ridiculously succinct syntax for working with javascript arrays, like [1, 2, 3].map('a * 2').filter('a % 5 == 0');. They’re like lambdas in C# but better.

Here’s the code in full:

// z() takes a string-lambda and returns a function representing that lambda
var z = function (lambda) {
  if (!(typeof lambda == 'string' || lambda instanceof String))
      return lambda; // if z() is passed a function, it is returned unmodified

  var parts = lambda.split(" => ");
  var inputs;
  var code;

  if (parts.length === 2) {
      inputs = parts[0].split(" ");
      code = parts[1];
  }
  else {
      inputs = "abcdef"; // Implicit parameter names a-f
      code = parts[0];
  }

  code = ' ' + code + ' ';

  return function () {
      var $$ = [];
      var expression = code;
      for (var i = 0; i < inputs.length || i < arguments.length; i++) {
          $$[inputs[i]] = arguments[i];
          // Only finds variables enclosed by spaces or brackets... TODO: proper finding and replacing of variable names.
          expression = expression.split(' ' + inputs[i] + ' ').join("$$['" + inputs[i] + "']");
          expression = expression.split('(' + inputs[i] + ')').join("($$['" + inputs[i] + "'])");
      }
      return eval(expression);
  }
}
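
Used on its own, z() simply turns one of these string-lambdas into a normal function. A couple of quick examples of calling it directly:

var double = z('a * 2');     // implicit parameter name 'a'
double(21);                  // 42

var add = z('x y => x + y'); // explicit parameter names
add(2, 3);                   // 5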

And to make the standard Array.prototype methods support the new string-lambdas:

Array.prototype.forEachF = Array.prototype.forEach;
Array.prototype.forEach = function (lambda) { return this.forEachF(z(lambda)); }

Array.prototype.everyF = Array.prototype.every;
Array.prototype.every = function (lambda) { return this.everyF(z(lambda)); }

Array.prototype.someF = Array.prototype.some;
Array.prototype.some = function (lambda) { return this.someF(z(lambda)); }

Array.prototype.filterF = Array.prototype.filter;
Array.prototype.filter = function (lambda) { return this.filterF(z(lambda)); }

Array.prototype.findF = Array.prototype.find;
Array.prototype.find = function (lambda) { return this.findF(z(lambda)); }

Array.prototype.findIndexF = Array.prototype.findIndex;
Array.prototype.findIndex = function (lambda) { return this.findIndexF(z(lambda)); }

Array.prototype.keysF = Array.prototype.keys;
Array.prototype.keys = function (lambda) { return this.keysF(z(lambda)); }

Array.prototype.mapF = Array.prototype.map;
Array.prototype.map = function (lambda) { return this.mapF(z(lambda)); }

Array.prototype.reduceF = Array.prototype.reduce;
Array.prototype.reduce = function (lambda) { return this.reduceF(z(lambda)); }

Array.prototype.reduceRightF = Array.prototype.reduceRight;
Array.prototype.reduceRight = function (lambda) { return this.reduceRightF(z(lambda)); }

Usage

This kind of syntax works best with functional-style programming, using methods like map and filter.

var numbers  = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// These 3 lines will all do exactly the same thing:
numbers.map(function (a) { return a * 2; }); // plain old javascript
numbers.map("a => a * 2"); // explicit variable names
numbers.map("a * 2"); // implicit variable name of 'a'

// This allows for simple, readable, functional array manipulation.
numbers.filter('a % 2 == 0').map('a * 2').reduce("a + ', ' + b");

Caution!

This method is sketchy and has been built here as a proof of concept. It’s significantly slower than normal javascript anonymous functions and has plenty of other serious problems. Enjoy!


Numerical Methods and the Green Brain Project

I posted about the Green Brain Project a while ago, before starting my placement there. Last week I finished, after spending 6 weeks working with the team investigating ways to improve their numerical simulation methods. For the interested, I’ll summarise the work I’ve done. The full report can be found here.

The Spine-Creator tool

Spine-Creator is a tool for building and simulating biological neural networks, built by Alex Cope of the Green Brain Project. It’s designed to be powerful, flexible and usable by non-programmers, with a graphical user interface to visualise and build the networks. ‘Components’ can be designed and connected together with other components, and the system can be compiled and simulated from within the program. The tool is open source and can be found on GitHub here.

The Spine-Creator visual interface

The behaviour of components is usually specified with systems of Ordinary Differential Equations (ODEs): mathematical systems that start at a given state and progress through time according to the differential equations. For neuron models, there would typically be variables and equations describing how voltages, resistances and chemical concentrations change over time from initial conditions.

Simulating these systems therefore requires solving these sets of equations, which usually isn’t possible with the normal algebraic methods taught in school. Instead, numerical methods are used: methods that calculate accurate estimates of how the system will behave, and that are usually parameterised to give simulations of any desired accuracy.

Forward Euler

Forward Euler is the simplest and most common numerical method for solving ODEs. It is based on the definition of the differential, and can be written in a line or two of code (C#):

// Takes the value of a variable (initial) along with the rate-of-change at that point (differential),
// returns an estimate of the value of the variable after a time-period (dt).
double forwardEuler(double initial, double differential, double dt) {
  return initial + dt * differential;
}

You can see that the amount the initial value changes by is proportional to the rate it is changing (differential) and the length of time it is changing for (dt). This method works, and can achieve any desired accuracy if iterated over and over again with tiny values for dt, but it has a number of stability problems when simulating particular systems. For example, if Forward Euler is used to solve a Sine-Cosine system…

Sin-Cosine System

The result wildly and exponentially oscillates rather than stably moving between ±1.0. The Spine-Creator tool used the Forward Euler method, and this problem (along with others) was the motivation for the research project.
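
To see that instability concretely, here’s a minimal javascript sketch (not taken from the report) of Forward Euler applied to the sine-cosine system dx/dt = y, dy/dt = -x, whose true solution stays on the unit circle:

// Repeatedly apply Forward Euler to dx/dt = y, dy/dt = -x,
// starting from x = 0, y = 1 (i.e. x(t) = sin(t), y(t) = cos(t)).
function forwardEulerSineCosine(dt, steps) {
  var x = 0.0, y = 1.0;
  for (var i = 0; i < steps; i++) {
    var dx = y;  // rate of change of x at the current state
    var dy = -x; // rate of change of y at the current state
    x = x + dt * dx;
    y = y + dt * dy;
  }
  return Math.sqrt(x * x + y * y); // amplitude; exactly 1.0 for the true solution
}

forwardEulerSineCosine(0.1, 1000); // ≈ 145, not 1.0 – the oscillation has grown exponentially

Each step multiplies the amplitude by √(1 + dt²), so no fixed dt, however small, removes the exponential growth – it only slows it down.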

Runge-Kutta methods

The main class of methods we investigated is called Runge-Kutta methods. They work by calculating the differential at multiple points between the initial time and dt, and calculating an estimate of the state at the end of the time-step using a linear combination of those differentials. There are a number of algorithms in this class, the most popular being modified Euler and Runge-Kutta 4th Order. These methods provide more accurate, efficient and stable solutions and can be used with much larger values for dt.
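
For illustration, here’s a sketch of a single classic Runge-Kutta 4th-order step for a one-variable ODE dy/dt = f(t, y) (the real implementations work on whole systems of equations; this scalar version just shows the idea of combining several sampled differentials):

// One RK4 step: sample the differential at four points across the step,
// then advance using a weighted average of those samples.
function rk4Step(f, t, y, dt) {
  var k1 = f(t, y);
  var k2 = f(t + dt / 2, y + dt / 2 * k1);
  var k3 = f(t + dt / 2, y + dt / 2 * k2);
  var k4 = f(t + dt, y + dt * k3);
  return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}

// Example: dy/dt = -y, one step from y(0) = 1 with dt = 0.5
rk4Step(function (t, y) { return -y; }, 0, 1, 0.5); // ≈ 0.6068, very close to e^-0.5 ≈ 0.6065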

Adaptive time-step methods

The methods described so far assume a value for dt has already been chosen. Adaptive time-step methods work by providing an estimate for the error introduced during a time period, and using that error estimate to suggest a new value for dt based on a given error-tolerance. These methods can, for example, use a very small dt during chaotic and difficult time-periods, and shift later to using a much larger dt when the system stabilises. The adaptive time-step method I investigated and implemented is called the Dormand-Prince method. It can result in very efficient and very accurate solutions compared to similar fixed-time-step methods. Run on a single-variable system (see the full report and code for details), the result clearly shows a changing time-step:

Sin-Cosine System

Visually, the dots get further apart as the system progresses and becomes more linear.
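
The heart of any adaptive method is the rule that turns an error estimate into a new dt. Here’s a rough sketch of that control logic only (the full Dormand-Prince method also needs its embedded pair of 4th- and 5th-order Runge-Kutta formulas to produce the error estimate in the first place – see the report and code for the real thing):

// Given the estimated error of the step just attempted, decide whether to
// accept it and suggest a new dt aimed at the given error tolerance.
function adaptStepSize(error, dt, tolerance) {
  var safety = 0.9; // aim slightly below the tolerance to avoid repeated rejections
  var scale = safety * Math.pow(tolerance / error, 1 / 5); // local error scales like dt^5 for a 4(5) pair
  scale = Math.min(5.0, Math.max(0.2, scale)); // never grow or shrink dt too violently
  return {
    accept: error <= tolerance, // if rejected, the step is redone with the smaller dt
    dt: dt * scale
  };
}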

Parallelisation and Synaptic Delays

One of the project’s aims is to simulate the systems in parallel on GPUs. The bottleneck to doing this as it stands is the large degree of connectivity within the network; sharing data between otherwise unconnected sub-systems prohibits simulating them in isolation, as their behaviour will always depend on neurons from another part of the network. We found a paper that describes a method for isolating subsystems for a maximum period of time, which restricts data-flow between components based on the synaptic delays in the network. Using this method could potentially aid the parallelisation of the simulation.

Awesome!

So overall it was a super cool project and I learnt a lot. It’s great to get back into mathsy programming with an enthusiastic academic team. A big shoutout to the Green Brain Project and the SURE scheme for enabling this project and others for other students. I’m looking forward to working with the team again in my 3rd year at the University.

Code and full report

  • The methods mentioned were implemented in an isolated C++ program available on GitHub.

  • The full report which was handed in at the end of the placement is available here (pdf). This includes tons of details and extras that aren’t mentioned in this blog post.


Science, Morality and Politics

There’s a philosophical position that goes something like this:

A statement that has no predictive power, and therefore cannot be shown to be true or false, is meaningless.

This is an attractive stance for the scientist who spends his or her life investigating and refining predictive models; it suggests a true objectivity to the world. It narrows the scope of knowledge down to that in which they are an expert. And if there’s one undebatable truth in philosophy, it’s that there are no undebatable truths in philosophy. And even that’s debated.

Philosophical knowledge, by its very nature, is that set of knowledge that cannot be verified by experiment. It can’t have predictive power: if it did, it would no longer be philosophical. To the hard-line scientist, this is the ultimate shoot-down of philosophy: it makes philosophy meaningless.

But there is a second role of knowledge missing from this view. Even without predictive power, knowledge and beliefs affect our behaviour and decisions. Enter moral philosophy: the world of knowledge that, despite not being verifiable, influences every action we make. Although, to the scientist, subjectivity may be seen as inherently negative, even the most stringent scientist will hold personal ethical beliefs, out of touch with their science. It’s the point of the quote that ended my last post; talking about the inability of science to guide our decisions, Feynman concludes: “… at the end, you must have some ultimate judgment.”

Moral framework

A moral framework is a set of beliefs – held by an individual – that can be used to ultimately answer all behavioural dilemmas. It fills the gap left by science, and exists as the ultimate end to why we do something the way we do. It adds a connection between our ideas of good and bad and the real world. To use a moral framework is to take a choice of possible actions, judge their worth according to the criteria within the framework, and choose the action of the most moral worth. To a computer scientist, it is analogous to a state-space search or the minimax algorithm – both exist to make decisions in a simulated world based purely on some external measure of “success”.

Although moral frameworks are subjective, most of us share some intuitive, ambiguous ideas of right and wrong. It’s wrong to hurt someone. It’s wrong to take away someone’s freedom. Our actions may be judged by their consequences to people’s happiness. Our actions may also be judged on our intent. Some actions can be wrong in principle, regardless of circumstance. These views can all be true to different extents for different people. And people struggle internally with how to bind them together into a single unambiguous moral framework. It’s possible to try and reason and justify these views, but being within the realm of philosophical knowledge, it often results in unresolvable ideological food-fights.

Where does science come in?

It’s established that science alone cannot derive an objective moral framework or ideology. It’s the ultimate reason that political debate often seems so futile and endless. It creates a frustration amongst scientists, who are so used to seeking objectivity in their field that they call for scientific principles to be introduced into policy making – with the hope of ending this ongoing political shouting in favour of evidence-based policy. But if our ideologies are personally chosen, what is the role of science in policy making?

A policy is not the same as an ideology. A policy is a set of real-world rules, which have real-world consequences hopefully pushing society closer to the ideals of the policy-creator. Say the politician in charge is a libertarian: he believes the worth of a policy should be judged by its effects in enabling individual freedom. The policy gives people freedom? GOOD. The policy takes away people’s freedom? BAD. The politician justifies his no-taxes, no-regulation economic policies with his ideology:

Taxes and regulation take away the freedom of individuals to trade with each other as they both see fit. Taking away money and enforcing our own rules on other people’s personal trades is wrong because they should have the freedom to trade with each other as they want, so long as the trade doesn’t take away the rights of others.

On its surface, it may look like this policy is only debatable on ideological grounds. But look deeper and we can see how this policy actually has two parts: The ideology is to promote freedom to individuals, the claim is that reducing taxes and regulation will result in satisfying these ends.

Say the policy is implemented. People are free to trade as they see fit within the lenient restraints of the new ‘libertarian’ society. Say, over time, the economy grows and large companies are allowed to form and merge. It turns out, in this society, that it is in the large companies’ profit interests to fix prices and create repressive monopolies – harvesting all wealth from the less well-off and restricting their employment options so, given a free choice under the law, citizens have to choose to work for the monopolistic companies. It’s the only option to survive. Would the libertarian politician – who values social mobility and individual freedoms – consider this society ‘free’? If he’d known that the long-term consequences of his policies would be that the 99% do not have a practical choice over their lifestyle or employment?

It’s a hypothetical story, the point is not about economics, and it is especially not a criticism aimed at the libertarian ideology. Instead, the story illustrates that an acceptable justification of a policy rests on more than just an ideology. It should rest on evidence that the policy’s consequences work towards the ideology. The failure of the above policy in achieving its ideological aims was not a fault of the ideology itself, it was the fault of the politician’s naive understanding of the policy’s consequences.

Evidence based policy

When scientists like me call for science based policy, we should call for the consequences of policies to be investigated scientifically, not their ideological foundations. A country being run without this step of investigation cannot make educated choices between policies. The naivety of politicians regarding the effects of their own policies is solvable with science, and it is here that science based policy should flourish. Ignorance about the limitations of science, which politicians may be more aware of than scientists, unnecessarily weakens the otherwise valid and important efforts of scientists in improving the quality of our policies. Science should have its place in politics, but misunderstandings and confusions about the roles of science and ideology need to be tackled before the scientific method can be fully taken advantage of within politics.


I, for one, welcome our new insect overlords

This post was written as a university assignment in April 2014. I’ve been offered a 6-week internship working for the Green Brain Project in Summer 2014.

Why creating autonomous honey bee quad-copters is the next logical step.

From The Terminator’s cold-hearted cyborg assassin to the heart-throb operating system Samantha in Her, science fiction has long been predicting the rise of humanoid robots. The huge potential for AI and robotics to either improve or destroy our society has been illustrated countless times, including in Isaac Asimov’s dramatic I, Robot – with a utopian work-free society turning quickly into a robot-controlled utilitarian dictatorship. However, regardless of the disruption these technologies will bring, research and innovation into autonomous robotics will continue at an ever-increasing rate.

Sheffield’s Green Brain project aims to contribute to this revolution by reverse engineering the brain of a honey bee, and creating autonomous flying honey bee robots controlled by a simulation of the brain. Within the Kroto Innovation Centre at the University of Sheffield, their setup is impressive: a clinically white observation room, with a large window on one side, sits almost empty apart from three powerful, expensive gaming PCs stacked in the corner, and a grey metallic quad-copter sitting apprehensively in the centre.

The Green Brain’s Quad-copter

The quad-copter is the honey bee – kitted up at every corner with sensors and receivers, mimicking accurately those of an actual bee. The gaming PCs are its brain – simulating a large and complex neural network on massively parallel GPUs.

This is the result of over two years’ hard work from the Green Brain Team – a small but impressive team of academics and engineers working at Sheffield. This work, they claim, is the next logical step in driving forward our understanding of our minds, and perhaps bringing us closer to the society abounding with robots predicted unanimously by science fiction.

… More →

Maths ain’t magic

Imaginary numbers and the nature of mathematics

I first heard of imaginary numbers when someone showed me the Mandelbrot Set:

Mandelbrot visualisation

They described it to me:

The x axis is for the real numbers, and the y axis is for the imaginary numbers. The colour of each pixel is determined by the properties of the imaginary and real numbers it represents.

My reaction was confusion, as it should have been. That picture sure doesn’t look imaginary. And how can these numbers exist on a computer if they’re imaginary? Maybe I’m not smart enough to understand, maybe it’s a concept only accessible to the minds of mathematicians. What are imaginary numbers?

Well… They’re made up numbers that don’t really exist like normal numbers do. There’s no number that can be squared into a negative number, so we make up the imaginary numbers!

I learnt to program in my teens, and when I wrote my own Mandelbrot set visualiser I realised it doesn’t really need imaginary, made-up numbers after all. All it needs is two standard real numbers and a formula.
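
The core of such a visualiser really is just a pair of real numbers per pixel and one update formula. A stripped-down sketch of that iteration (not my original code) looks something like this:

// Iterate z -> z^2 + c for the point c = (cx, cy), tracking the "complex" number z
// as two ordinary real numbers x and y. The iteration count picks the pixel's colour.
function mandelbrotIterations(cx, cy, maxIterations) {
  var x = 0, y = 0;
  for (var i = 0; i < maxIterations; i++) {
    if (x * x + y * y > 4) return i;  // escaped: the point is outside the set
    var newX = x * x - y * y + cx;    // the "real part" of z^2 + c
    var newY = 2 * x * y + cy;        // the "imaginary part" of z^2 + c
    x = newX;
    y = newY;
  }
  return maxIterations; // never escaped: the point is (probably) inside the set
}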

What’s ‘imaginary’ about imaginary numbers then? Well ‘real’ negative numbers don’t have ‘real’ square roots, so we call their roots ‘imaginary’. It’s a name. Are they more imaginary than the ‘real’ numbers? No. You don’t need a special type of calculator or a special kind of pen to calculate with imaginary numbers. It’s really not the best naming decision: calling a fundamental and useful concept ‘imaginary’ scares and confuses people away from trying to understand it. The crux to understanding imaginary numbers is to realise how normal and unimaginary they really are.

Mathematics studies formal abstractions – mental constructions that have been defined so carefully and unambiguously that their properties become objective. 1, 2, 3 and the rest of the counting numbers, they’re all abstractions: concepts we create to categorise, perceive and understand the world. We use abstractions to represent things, and whether abstractions are ‘real’ or not doesn’t even matter; they’re all as real or unreal as each other. Imaginary numbers and real numbers exist equivalently as mathematical abstractions: useful tools to represent the world we live in.

It’s the way the mind works. We develop concepts and abstractions to represent the world and its behaviour – mathematics studies the subset of these concepts that have been formally defined. To know what infinity is, or what an imaginary number is, is really just knowing how the mind uses abstractions to represent our surroundings.

The view that mathematics is just concepts and abstractions in our psychology is far from the only accepted philosophy. There are plenty of schools of thought on what maths is. For me, however, any existential or borderline-spiritual view of mathematical knowledge has caused more confusion than insight and got in the way of learning the actual maths.


Gresham’s Law and Bitcoin deflation

You’re at the local store buying some bread and milk. The cashier asks you for £3.35. You open your wallet and you’ve got two £5 notes:

Crusty old £5:

Old note

Crisp new £5:

New note

Which one do you give to the cashier?

Unless you’re feeling kind – that is, you’re not an economic man – you’d want to get rid of your crusty old £5 note; they’re worth the same amount to the cashier, so you’d be a fool to give up the shiny new one for nothing extra. This is the basis of a paradoxical truth in economics: the measure of a currency’s success is whether it is used in transactions, yet the shiny, new, better £5 notes will get hoarded while the old ones get traded. People want to get rid of bad currency, which means it’s used more in transactions. Enter Gresham’s Law:

Gresham’s law is an economic principle that states: “When a government overvalues one type of money and undervalues another, the undervalued money will leave the country or disappear from circulation into hoards, while the overvalued money will flood into circulation.” It is commonly stated as: “Bad money drives out good”.

We can rewrite the above scenario with bitcoin replacing the new shiny £5 note, and our standard fiat currencies as the old one. Presuming bitcoin is accepted universally and is as easy to pay with as paying by card, which one should you choose?

Bitcoin is digital gold: there is a finite supply. Cash is subject to continuous steady inflation: its value will always be gradually decreasing. So assuming the novelty of spending bitcoin has worn off in this future, it would make sense to keep your value-preserving bitcoin and get rid of your cash. This is one reason why economists criticise the deflationary (or at least zero-inflation) nature of bitcoin.

The reality is complex, however, and this property doesn’t necessarily detract from its other advantages. But this is one reason for believing that even if bitcoin is successful, it won’t replace other currencies too easily.


Bad Pharma - Book review

Bad Pharma book cover

We take pride in our western medicine. We believe it is evidence based, objective, and working with the correct incentives. Instead, Ben Goldacre writes: (p238)

… We have occasional, small brief trials, in unrepresentative populations, testing irrelevant comparisons, measuring irrelevant outcomes, with whole trials that go missing, avoidable design flaws, and endless reporting biases that only persist because research is conducted chaotically, for commercial gain, in spuriously expensive trials. The poor-quality evidence created by this system harms patients around the world. And if we wanted, we could fix it.

This is a powerful book, with a clear and unforgiving message: Western medicine is not as scientific as we are told; the system is corrupt and industries work with their own profits in mind.

Bad Science

Goldacre is known for his blog – Bad Science – where he debunks claims ranging from the efficacy of homeopathy to the popular nutrition supplements of “Dr” Gillian McKeith. He turns his attention here to the more serious malpractices going on in western medicine as a whole.

It is a long book, covering over 350 pages excluding notes, illustrating his points with shocking stories and backing them up with hard evidence where possible. He covers many topics, ranging from the largely unsolved positive publication bias in academic journals, to ‘regulatory capture’: how pharmaceutical companies become their own regulators. The book ends with an unforgiving tour-de-force breakdown of how marketing is used to deceive doctors and patients.

Goldacre knows this isn’t an academic paper, and he communicates to his readers early on that it shouldn’t be treated as one. The result, however, is a book that treads a confusing line between sensationalist popular science and a seminal analysis of a very serious problem. Even though it is aimed at the general public, the points in this book need to be treated with respect by the people in power.

I really enjoyed this book and learnt a lot from it, and I’d recommend it especially to those involved in medicine – who will be in a better position than I am to comment on the problems raised here, and maybe do something about them.

Buy Ben Goldacre’s Bad Pharma on Amazon.co.uk


Bitcoin as a product

Please note I’m not an expert on Bitcoin. I write about this stuff for fun.

I’ve been close to investing in Bitcoin over the past couple of months; its promise as a technology comparable to the internet of 20 years ago is seriously tempting. If it ever gains wide acceptance, even a small investment could be worth big money in the coming decades. But I’ve given up – for the moment at least – on that hope. Here’s why:

Money for the people

Bitcoin is oft peddled with an economic libertarian ideology. Our fiat currencies are controlled and regulated by the banks and governments, adding fees and friction to transactions. By relying on the system as it is, we hand our power to an unrepresentative minority who can profit from and manipulate our fiat currencies, not necessarily with the majority’s best interests in mind. Bitcoin is a decentralised currency that can’t be controlled by any individual party; it lets regular folk and small businesses make transactions with each other without needing to trust a third party. This is true, but it’s not the whole story.

The current system

To gain a real understanding of where bitcoin sits as a technology, we need to understand how the current system works, and what problems (if any) bitcoin solves.

First, let’s look at hard cash – the stuff we carry around in our wallets. Already, cash has some of the essential properties that bitcoin claims: it’s untraceable, unregulatable, and anonymous; perhaps to a greater extent even than bitcoin. Cash is used for drugs, weapons, money laundering, untaxed income, you name it. Nobody can charge a small business for the right to accept cash. Cash falls down at two hurdles, however: it’s not digital, and control of its production is centralised. We’ll look at what this really means in a second.

Now let’s look at banks and the digitised money they offer through debit and credit cards. By digitising our cash and giving it to the banks, some important things have now changed: it is no longer untraceable, anonymous, or unregulatable. By trusting the banks with our money we have sacrificed this for some serious benefits: they keep our money more secure than we could realistically keep cash, they offer fast digital transactions around the world and online, often with no fees or friction, and they offer interest on our savings at no extra cost. The currency is still the same as our cash, so production doesn’t change hands. The important thing to realise here is that banks are offering a real service to us in securing and digitising our cash, and in some sense bitcoin must solve a problem not already solved by the combination of modern banking and cash.

… More →

Chrome extension #1: Subnito

I decided to create a Chrome extension last week; it was surprisingly simple. Chrome extensions basically give you a very simple way to inject your own javascript into whatever webpages you like, plus access to Chrome’s own APIs for playing with tabs, storing data etc. The whole extension uses nothing but javascript (you can add jquery etc. if you like) and HTML. Even the manifest file is JSON.

Subnito was my first serious chrome extension. It uses the reddit API to detect if you’re browsing an 18+ subreddit on reddit.com and pushes you into incognito mode if so.

The bulk of the work is done in one short file.

background.js

// Trigger this code onBeforeNavigate to any url. This happens even before the web request has been made to increase speed.
chrome.webNavigation.onBeforeNavigate.addListener(function(details) {
  var url = details.url;
  var bare_url = url.replace("https://", "").replace("http://", ""); // remove protocol
  var match_string = "www.reddit.com/r/";
  if (bare_url.substring(0, match_string.length) === match_string) { // check if we're on a reddit subreddit
      var subreddit = bare_url.split('/')[2]; // extract the subreddit name
      // request a json document giving the details of the sub using the reddit API
      $.getJSON("http://www.reddit.com/r/" + subreddit + "/about.json", function(data) {
          if (data['data']['over18']) { // check the boolean over18 flag in the returned json
              chrome.tabs.remove(details.tabId, function() { }); // remove non-incognito tab
              chrome.history.deleteUrl({ "url": url}, function() { }); // delete url from history completely
              chrome.windows.create({"url": url, "incognito": true}); // open incognito window
          }
      });
  }
})
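
For completeness, the manifest wiring this up looks roughly like the sketch below (a guess at the shape rather than a copy of the real file – see the GitHub repo for the actual manifest). It registers background.js (plus jQuery, for $.getJSON) as background scripts and requests the webNavigation, tabs, history and reddit.com permissions the code above relies on:

{
  "manifest_version": 2,
  "name": "Subnito",
  "version": "1.0",
  "description": "Browse 18+ subreddits in incognito mode automatically",
  "background": {
    "scripts": ["jquery.js", "background.js"]
  },
  "permissions": [
    "webNavigation",
    "tabs",
    "history",
    "http://www.reddit.com/*"
  ]
}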

Full source code on GitHub

Install on Chrome Store


Hello World!

… From the beautiful mix of technologies that make this site

I’ve put off making a website for so long (I’ve had this domain pointing to nothing for 2+ years) because I believed that if I was going to do it, I’d do it properly and build the whole thing myself. But I dislike CSS and I’ve always found more fun things to do. I’ve finally found a few technologies cool enough for me to get excited about using, and it only took an hour to get this fancy-looking site running.

Octopress

Octopress is a fantastic (free, open source) blogging platform built on Jekyll: a fantastic (free, open source) static site generator that uses Ruby: a simple, compact, open-source language similar to python. Pages and posts are written in markdown (an extremely simple syntax for writing basic documents) and auto-compiled into a static site. There are loads of free Octopress themes available at opthemes.com. The theme on this page is Octoflat, a minimal flat theme built using Twitter Bootstrap – which, finally, is a large front-end web development library that makes it much easier to build functional, good-looking sites. Beautiful! It’d have taken me years to build this the proper way.

Heroku

Heroku is a hosting service that supports Ruby apps and lets you deploy with a ‘git push heroku master’. Couldn’t be simpler.

This website’s source code is available (complete with all posts in markdown) on my GitHub page, but if you want to make a copycat of some sort you’d be best off working from the Octopress default template directly.