SickSad

Cameron Speaks

The Tories attempted to remove their 2010 pre-election speeches and press releases from the internet. Their robots.txt file was changed to politely ask any web-scraping bots to forget earlier copies of the website, which removed previous copies from many internet archives such as the Wayback Machine. Fortunately, the British Library has kept copies.
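For illustration, a robots.txt of roughly this shape is all it takes (this is a hypothetical example, not the actual file): the Wayback Machine retroactively honoured a site's current robots.txt, so a blanket Disallow hid even pages archived years earlier.

```
# Hypothetical example, not the Conservatives' actual robots.txt
User-agent: ia_archiver
Disallow: /
```

Here `ia_archiver` is the Internet Archive's crawler user-agent; `User-agent: *` would apply the rule to all well-behaved bots.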

So, for your convenience, here’s an archive of all the speeches made by David Cameron pre-election in plain text format.

But what to do with this archive? We decided to build a Markov-chain-driven Cameron-Bot that spouts nonsense speeches in a barking, authoritative, mechanical voice.
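For anyone unfamiliar with the technique, a minimal sketch of a Markov chain text generator looks something like this. This is an illustrative toy, not the actual Cameron-Bot code:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain to produce locally plausible word soup."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this prefix never continued in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

if __name__ == "__main__":
    speech = "we will build a stronger society and we will build a better future"
    print(generate(build_chain(speech), length=8, seed=0))
```

Trained on the speech archive, the output is a string of word sequences that are each individually plausible but globally meaningless, which is rather the point.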

Because that’s what the world needs.

You’re welcome.

Note – We built this a good few months ago, just after the story about the speeches being removed broke, when maybe it would have been more culturally relevant, but never got around to blogging about it. But then again, why believe a fucking thing we write.

Don’t Worry Government, I Got This Porn Filter Sorted

So I hear the UK government wants to make a porn filter. About bloody time, I reckon. I’m fed up of happily browsing the Internet for boobs, only to have non-porn related subject matter thrust down my face hole.

So, taking inspiration from other great Internet-filtering nations such as North Korea, China, Syria, Iran, Cuba, Bahrain, Belarus, Burma, Uzbekistan, Saudi Arabia and Vietnam, I decided to help out the UK government and build an Internet filter that only allows pornographic material through.

You’re Welcome.

Setting Up

All code is available here: https://github.com/SickSad/UKPR0nFilter

Just follow this simple step-by-step video walk-through and you’ll have a porn filter running in no time!

Nerd Stuff

The filter is a DNS server which checks all queries against the OpenDNS FamilyShield DNS server. Any request that is denied by OpenDNS is allowed by our DNS server, and any request allowed by OpenDNS is blocked by us.

The server itself is built using the Python Twisted framework, which both handles the DNS requests and acts as a simple web server to host the denial page.

pns.py
#!/usr/bin/env python
#
# pns.py is a simple DNS server which only allows pornographic material through.
#
# Bits and pieces robbed from here:
# https://gist.github.com/johnboxall/1147973
# http://twistedmatrix.com/trac/wiki/TwistedWeb
import socket
import dns.resolver as DNS

from twisted.internet.protocol import Factory, Protocol
from twisted.internet import reactor
from twisted.names import dns
from twisted.names import client, server
from twisted.web import server as webserver
from twisted.web import resource
import sys
index = open('index.html').read()

class Simple(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return index


class DNSServerFactory(server.DNSServerFactory):

    def gotResolverResponse(self, (ans, auth, add), protocol, message, address):
        qname = message.queries[0].name.name
        print qname
        blocked_resolver = DNS.Resolver()
        blocked_resolver.nameservers = ['208.67.222.123','208.67.220.123']


        porn = False
        try:
            results = blocked_resolver.query(qname)
        except (DNS.NXDOMAIN, DNS.NoAnswer):
            results = []
        for r in results:
          print r
          # FamilyShield resolves blocked domains to its 67.215.65.*
          # block page, so an answer in that range means porn
          if str(r).startswith('67.215.65'):
            print "PRON"
            porn = True

        if porn == False:
          print "NOT PRON"
          for answer in ans:
            if answer.type != dns.A:
                continue
            answer.payload.address = socket.inet_aton(ip_address)
            answer.payload.ttl = 60
        #address = ('127.0.0.1', 43160)
        args = (self, (ans, auth, add), protocol, message, address)
        result=server.DNSServerFactory.gotResolverResponse(*args)
        print result
        return result


verbosity = 0

ip_address = ""
if len(sys.argv) > 1:
    ip_address = sys.argv[1]
else:
    ip_address = '127.0.0.1'

resolver = client.Resolver(servers=[('8.8.8.8', 53)])
factory = DNSServerFactory(clients=[resolver], verbose=verbosity)
protocol = dns.DNSDatagramProtocol(factory)
factory.noisy = protocol.noisy = verbosity

reactor.listenUDP(53, protocol)
reactor.listenTCP(53, factory)
site = webserver.Site(Simple())
reactor.listenTCP(8080, site)
reactor.run()

Long Exposure Light Animation

Designs for all the hardware and software used in this blog post are available here under a Creative Commons Attribution-ShareAlike 3.0 Unported license.

Introduction

I have been experimenting with using a computer controlled delta robot to draw long exposure light animations.

By using a precisely controllable robot to draw each frame of the animation, a level of precision is achieved that gives the animation a quality similar to CGI, while still maintaining natural lighting.

The process of drawing a single frame can be seen in the following video. This would of course take place in a darkened room with a camera set to take a long exposure photograph pointing towards the base of the delta robot.

Delta Robot

The delta robot used to create these animations is a custom design. All the hardware designs, control software and firmware are available here. The electronics are based around an STM32F4 Discovery board because it has a floating point unit, which is needed to run the inverse kinematics calculations fast enough using floating point arithmetic. The electronics for the project are largely undocumented, as the design evolved while the robot was being built. Hardware connections to the STM32F4 Discovery board can be found in the “hardware.h” file in the git repository. The motor drivers used are Pololu A4988 driver boards.
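For reference, the standard inverse kinematics for a rotary delta robot (solving each shoulder angle from a circle–sphere intersection in that arm's plane) can be sketched in Python as follows. This is an illustration of the well-known algorithm, not the actual firmware routine, and the link dimensions are made-up values, not the robot's real geometry:

```python
import math

def delta_ik(x, y, z, e=40.0, f=120.0, re_len=200.0, rf_len=100.0):
    """Shoulder angles (degrees) of a rotary delta robot for effector (x, y, z).

    e: effector triangle side, f: base triangle side,
    rf_len: upper arm length, re_len: lower (parallelogram) arm length.
    z is negative below the base. Returns None if the point is unreachable.
    The default dimensions are illustrative, not the real robot's.
    """
    tan30 = math.tan(math.radians(30.0))

    def angle_yz(x0, y0, z0):
        y1 = -0.5 * tan30 * f      # shoulder joint position in the arm's plane
        y0 = y0 - 0.5 * tan30 * e  # shift the effector joint into the arm frame
        # Elbow lies on a circle of radius rf_len about the shoulder and a
        # sphere of radius re_len about the wrist; solve their intersection.
        a = (x0 * x0 + y0 * y0 + z0 * z0
             + rf_len ** 2 - re_len ** 2 - y1 * y1) / (2.0 * z0)
        b = (y1 - y0) / z0
        d = -(a + b * y1) ** 2 + rf_len * (b * b * rf_len + rf_len)
        if d < 0:
            return None            # no intersection: point is unreachable
        yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)
        zj = a + b * yj
        theta = math.degrees(math.atan(-zj / (y1 - yj)))
        return theta + 180.0 if yj > y1 else theta

    # The other two arms are the same problem with (x, y) rotated by +/-120 deg.
    cos120, sin120 = -0.5, math.sqrt(3.0) / 2.0
    thetas = (angle_yz(x, y, z),
              angle_yz(x * cos120 + y * sin120, y * cos120 - x * sin120, z),
              angle_yz(x * cos120 - y * sin120, y * cos120 + x * sin120, z))
    return None if None in thetas else thetas
```

Each target point needs three of these solves, each involving a square root and an arctangent, which is why having hardware floating point on the STM32F4 matters at motion-control rates.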

Software

The delta robot is controlled using GCode. The following Python script was used to generate the GCode for the animations; the spindle on/off commands are used to control the LED. The GCode is then fed to the delta robot via a serial link, where it is interpreted and the actions performed.

animation.py
#!/usr/bin/env python
import random
import math
from random import randint


def ease_in_quad(time, begin, change, duration):
    time = float(time)
    begin = float(begin)
    change = float(change)
    duration = float(duration)
    val = change*(time/duration)*(time/duration) + begin
    return val

def ease_out_quad(time, begin, change, duration):
    time = float(time)
    begin = float(begin)
    change = float(change)
    duration = float(duration)
    val = -change*(time/duration)*((time/duration)-2.0) + begin
    return val

def generate_circle(radius=1.0, degree_steps=10):
    points = []
    points.append((radius, 0, 0))

    for val in range(degree_steps - 1):
        angle = (360.0 / degree_steps) * (val + 1)
        r = math.radians(angle)
        x = points[0][0]
        y = points[0][1]
        z = points[0][2]
        x2 = x * math.cos(r) - y * math.sin(r)
        y2 = y * math.cos(r) + x * math.sin(r)
        z2 = z
        points.append((x2, y2, z2))
    return points

def generate_line(start=(0,0,50), end=(0,0,45)):
    points = [start, end]
    return points

def circle(position, radius=10.0, degree_steps=10):
    points = []
    face = []
    points = generate_circle(radius, degree_steps)
    points = add_offset(points, position)
    face = range(len(points))
    face.append(0)
    return points, face

def line(start, end):
    points = generate_line(start, end)
    face = range(len(points))
    return points, face


def add_offset(points, (x_offset, y_offset, z_offset)):
    new_points = []
    for index in range(len(points)):
        x = points[index][0] + x_offset
        y = points[index][1] + y_offset
        z = points[index][2] + z_offset
        new_points.append((x, y, z))
    return new_points


def add_shape((points, faces), (new_points, new_face)):
    if new_face == None or new_points == None:
        return (points, faces)
    new_face = [f+len(points) for f in new_face]
    faces.append(new_face)
    points += new_points
    return (points, faces)

def add_object((points, faces), (new_points, new_faces)):
    if new_faces == None or new_points == None:
        return (points, faces)
    for new_face in new_faces:
        new_face = [f+len(points) for f in new_face]
        faces.append(new_face)
    points += new_points
    return points, faces

def save_as_gcode(points, faces, filename = 'output.gcode'):
    spindle_on = "M3"
    spindle_off = "M5"
    f = open(filename, 'w')
    drawn = []
    for face in faces:
        for index, vertex_index in enumerate(face):
            if index == 1:
                f.write(spindle_on+'\n')
            vertex = points[vertex_index]
            x = vertex[0]
            y = vertex[1]
            z = vertex[2]
            cmd = "G1 X"+str(x)+" Y"+str(y)+" Z"+str(z)+ " F"+'3200.0'+'\n'
            f.write(cmd)
            drawn.append(cmd)
        f.write(spindle_off+'\n')
    f.close()


class ExpandingCircleAnimation:
    def __init__(self, position, start_tick, end_tick, tween_function, begin_radius, end_radius, degree_steps=20.0):
        self.start_tick = start_tick
        self.end_tick = end_tick
        self.tween_function = tween_function
        self.begin_radius = begin_radius
        self.end_radius = end_radius
        self.position = position
        self.degree_steps = degree_steps

    def get(self, tick):

        if tick < self.start_tick or tick >= self.end_tick:
            return None, None

        time = tick - self.start_tick
        begin = self.begin_radius
        change = self.end_radius - self.begin_radius
        duration = self.end_tick - self.start_tick
        radius = self.tween_function(time, begin, change, duration)
        points, faces =  circle(self.position, radius, int(self.degree_steps))
        return (points, faces)


class FallingLineAnimation:
    def __init__(self, position, start_tick, end_tick, tween_function, begin_z, end_z, length = 5.0):
        self.start_tick = start_tick
        self.end_tick = end_tick
        self.tween_function = tween_function
        self.begin_z = begin_z
        self.end_z = end_z
        self.position = position
        self.length = length

    def get(self, tick):

        if tick < self.start_tick or tick >= self.end_tick:
            return None, None

        time = tick - self.start_tick
        begin = self.begin_z
        change = self.end_z - self.begin_z
        duration = self.end_tick - self.start_tick
        z = self.tween_function(time, begin, change, duration)
        l_start = (self.position[0], self.position[1], z)
        l_end = (self.position[0], self.position[1], z - self.length)

        points, faces =  line(l_start, l_end)
        return (points, faces)

class RainDropAnimation:
    def __init__(self, position, start_tick, end_tick, height=50.0, radius=30.0, loop=False, debug =False):
        self.loop = loop
        self.debug = debug
        self.start_tick = start_tick
        self.end_tick = end_tick
        self.height = height
        self.radius = radius
        self.duration = self.end_tick - self.start_tick
        self.drop = FallingLineAnimation(position, start_tick, start_tick+self.duration/2.0, ease_in_quad, position[2]+height, position[2], 5.0)
        self.ripple1 = ExpandingCircleAnimation(position,(start_tick+(self.duration/2.0)),(start_tick+self.duration),ease_out_quad, 0.0,radius)

    def get(self, tick):
        if self.loop:
            tick = math.fabs(math.fmod(tick,self.duration))
        if self.debug:
            print tick
        points = []
        faces = []
        points, faces = add_shape((points, faces), self.drop.get(tick))
        points, faces = add_shape((points, faces), self.ripple1.get(tick))
        return points, faces



def main():
    points = []
    faces = []

    d1 = RainDropAnimation((20.0,20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d2 = RainDropAnimation((0.0,20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d3 = RainDropAnimation((-20.0,20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)

    d4 = RainDropAnimation((20.0,0.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d5 = RainDropAnimation((0.0,0.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d6 = RainDropAnimation((-20.0,0.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)

    d7 = RainDropAnimation((20.0,-20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d8 = RainDropAnimation((0.0,-20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)
    d9 = RainDropAnimation((-20.0,-20.0,0.0), 0.0, 30.0, 80.0, 20.0, loop=True)


    points = []
    faces = []
    duration = 60

    o1 = randint(0,30)
    o2 = randint(0,30)
    o3 = randint(0,30)
    o4 = randint(0,30)
    o5 = randint(0,30)
    o6 = randint(0,30)
    o7 = randint(0,30)
    o8 = randint(0,30)
    o9 = randint(0,30)

    for tick in range(30):
        points = []
        faces = []
        points, faces = add_object((points, faces),d1.get(tick+o1))
        points, faces = add_object((points, faces),d2.get(tick+o2))
        points, faces = add_object((points, faces),d3.get(tick+o3))
        points, faces = add_object((points, faces),d4.get(tick+o4))
        points, faces = add_object((points, faces),d5.get(tick+o5))
        points, faces = add_object((points, faces),d6.get(tick+o6))
        points, faces = add_object((points, faces),d7.get(tick+o7))
        points, faces = add_object((points, faces),d8.get(tick+o8))
        points, faces = add_object((points, faces),d9.get(tick+o9))

        filename = str(tick).zfill(3)+".gcode"
        save_as_gcode(points, faces, filename)


if __name__ == "__main__":
    main()

Long Exposure Laser Photography

Introduction

The following photos were taken using long exposure photography and an electronically controllable sweeping laser line. This produces an effect similar to the rolling shutter effect which CMOS sensors are prone to, albeit over a considerably longer exposure time.

Hardware

The hardware for this project is incredibly simple. The only part which may be tricky to obtain is the laser line generator. The one used in this project was obtained by disassembling a laser spirit level from a hardware store. Here is the rundown of parts:

  • Arduino
  • Laser line generator
  • 220 Ohm resistor
  • Continuous rotation servo

A normal servo can be used with a few minor changes to the code in the software section. A continuous rotation servo was used because it was available at the time.

The 220 Ohm resistor is used to current-limit the 5 V supply to the laser line generator. This isn’t an ideal solution, and you may need to tweak the value of the resistor to get the optimum brightness from the laser line.
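As a rough sanity check on that resistor choice (assuming a typical red laser diode module with a forward voltage of about 2.2 V, which is a guess rather than a measured value):

```python
def laser_current_ma(v_supply=5.0, v_forward=2.2, r_ohms=220.0):
    """Current through the series resistor, I = (Vs - Vf) / R, in milliamps."""
    return (v_supply - v_forward) / r_ohms * 1000.0

print(laser_current_ma())  # roughly 12.7 mA with the values above
```

A smaller resistor gives more current and a brighter line at the cost of stressing the diode, which is why the resistor value is the thing to tweak.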

Once the laser line was attached to the servo and the whole assembly suitably mounted it looked like the following.

Software

Like the hardware, the software for the project is trivial. The Arduino code simply rotates the servo between two positions. It should be noted that because a continuous rotation servo is used, the positions rotated between may vary for each cycle. The delays in the loop() function can be used to tweak these positions.

Arduino Servo Control
// Arduino Servo Control
// By sicksad http://www.sicksad.com
// This code is in the public domain.

#include <Servo.h> 

Servo servo;

int stop_speed = 90;
int speed = 10;

void setup()
{
  servo.attach(9);
}

void loop()
{
  servo.write(stop_speed+speed);
  delay(2100);
  servo.write(stop_speed-speed);
  delay(1500);
}

Further Work

The hardware for this was built in less than an hour from parts that were to hand at the time. To take this work further I’d be interested in building a multi-degree-of-freedom robotic head to mount the laser on, using stepper motors or high-quality servos to provide precise, controlled motion.

The camera’s shutter is not synchronised to the motion of the laser line. To fix this, an IR remote to trigger the shutter could be implemented using the Arduino and an IR LED.

Approximating Images With Random Lines

I’ve been experimenting with iteratively generating approximations of images using black lines.

The algorithm starts from a blank image. On each iteration a random black line is drawn onto each of 40 copies of the current source image, and each copy is compared to the goal image using SSIM to measure similarity. If any of the 40 copies is more similar than the source image, the best one becomes the source for the next iteration. If none are more similar, the algorithm repeats with the unchanged source image until a closer match is found.

SSIM was chosen as the similarity measure over the traditional PSNR or mean squared error measures because it is better suited (although not perfect) for measuring the similarity between two images as perceived by the human eye.

Plotting the change in SSIM over 9000 iterations results in the following fairly predictable graph.

From this graph it can be seen that beyond roughly 3500 iterations little extra structural similarity is obtained.

Comparison with PSNR & MSE

The following two images show the same algorithm run using mean squared error and PSNR respectively.

As you’d expect, the images are very similar: PSNR is essentially a monotonic transform of mean squared error, so both rank candidate images identically.

Plotting the mean squared error against iterations produces a similar graph to the one for SSIM, but note the inversion of the shape, since mean squared error measures dissimilarity.

Also notable is how much sooner the curve flattens (at around 1500 iterations) compared to SSIM.

In an application such as this, where the end result is inherently subjective, it is difficult to make a strong assertion about the best similarity measure for the purpose. Personally I prefer the results produced by SSIM, as I feel it captures some of the finer facial features, which helps to produce a more recognizable end result.

Code

The code for this project was written in Python, based mainly on the PIL and pyssim libraries. The following is a version of the code used to generate the above images, with a few bits of logging code removed.

The number of iterations is hard-coded to 9000 and the number of images in a generation is set to 40. These are fairly arbitrary choices and can easily be altered in the code for experimentation. Also included in the code are two methods for computing PSNR and mean squared error.

draw.py
#!/usr/bin/env python

from PIL import Image
from PIL import ImageDraw
from random import randint
import ssim
import os
import math
import numpy as np

def generate_line(max_width, max_height, max_length=50):
  start = (randint(0,max_width),randint(0,max_height))
  length = float(randint(0, max_length))
  angle = math.radians(float(randint(0,360)))
  x = length * math.cos(angle) + start[0]
  y = length * math.sin(angle) + start[1]
  line = [start,(x,y)]
  return line


def mean_squared_error(img1, img2):
    if img1.size != img2.size:
        raise Exception("size of images must be the same")
    if img1.mode != 'L':
        img1 = img1.convert('L')
    if img2.mode != 'L':
        img2 = img2.convert('L')

    img1 = np.array(img1.getdata(), dtype=float) / 255.0
    img2 = np.array(img2.getdata(), dtype=float) / 255.0

    # mean of the squared pixel differences, normalised to [0, 1]
    return np.mean((img1 - img2) ** 2)

def psnr(img1,img2):
    mse = mean_squared_error(img1,img2)
    return 10.0*math.log10(1.0/mse)

def main():
  img = Image.open("kitten.png")
  result_img = Image.new("RGB",img.size)

  draw = ImageDraw.Draw(result_img)
  draw.rectangle([(0,0),result_img.size],fill=(255,255,255))
  max_width, max_height = result_img.size

  for i in range(9000):
      best_img = result_img.copy()
      best_score = ssim.compute_ssim(img,result_img)

      for l in range(40):
          temp_img = result_img.copy()
          draw = ImageDraw.Draw(temp_img)
          line = generate_line(max_width, max_height)
          draw.line(line,width=1, fill=(0,0,0))
          score = ssim.compute_ssim(img,temp_img)
          print score
          if score > best_score:
              best_score = score
              best_img = temp_img.copy()
      
      result_img = best_img
      output_name = str(i).zfill(6)+'.png'
      result_img.save(output_name)

  result_img.show()
  img.show()


if __name__ == "__main__":
  main()

Installing Dependencies on Ubuntu

Pyssim can be installed through pip (the Python package installer). To get pip, run:

sudo apt-get install python-pip

To speed things along I recommend installing the following packages before installing pyssim:

sudo apt-get install python-matplotlib python-scipy python-numpy

Finally run the following command to install pyssim via pip:

sudo pip install pyssim