Rafal Spacjer blog

{ skirmishes with code }

Leiningen: Working With Local Repository

Leiningen is the de facto standard tool for creating and managing Clojure projects. To create a new project we can simply write:

lein new my-app

and a basic project structure is created for us. To add any dependencies (downloaded from the Maven or Clojars repositories) we need to modify the project.clj file. Let’s say we want to generate HTML from a Markdown string, so we need to add the markdown-clj library. To do so we modify the project.clj file in our project:

(defproject my-app "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [markdown-clj "0.9.63"]]
  :main ^:skip-aot my-app.core)

Then we can write the following code in the my-app/src/my_app/core.clj file:

(ns my-app.core
  (:require [markdown.core :as mark]))

(defn -main []
  (print
   (mark/md-to-html-string "#Header")))

Everything works great! But now, for some reason, we temporarily want to modify the md-to-html-string function (from markdown-clj/src/markdown/core.clj) to convert all our text to upper case. Figuring out how to do that can take a while (at least for me it wasn’t obvious how to change dependent code), so I will show you.

First we need to clone the git repository to local disk:

git clone https://github.com/yogthos/markdown-clj.git

Then we can modify the md-to-html-string function in the core.clj file:

(defn md-to-html-string
  "converts a markdown formatted string to an HTML formatted string"
  [text & params]
  (when text
    (let [input (new StringReader text)
          output (new StringWriter)]
      (apply (partial md-to-html input output) params)
      (clojure.string/upper-case
       (.toString output)))))

and change the version in the project.clj file, so it doesn’t interfere with the original one:

(defproject markdown-clj "0.9.63-SNAPSHOT"
  :description "Markdown parser"
  :url "https://github.com/yogthos/markdown-clj"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]]
  ....)

and the last (but most important) part is to install this library in our local Maven repository. To do that we need to be in the markdown-clj project directory (where the project.clj file is) and run:

lein install

This will install the jar and pom of the markdown-clj project, in version 0.9.63-SNAPSHOT, into the local repository. We can now use it by simply specifying the correct version in our project.clj file:

(defproject my-app "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [markdown-clj "0.9.63-SNAPSHOT"]]
  :main ^:skip-aot my-app.core)

After this, the modified version of the markdown-clj project will be used. When we execute this code:

(mark/md-to-html-string "#Header")

we will get:

<H1>HEADER</H1>

This can be useful if you need to temporarily fix a bug or test your future pull request.

Wrocsharp 2015

On the 12th of March I attended Wroc# 2015 – a free software developer conference organized by the Objectivity company. This was the first edition (and I hope not the last!) of this event, but the organization was so good that you would think it was one of many. The organizers provided free food, drinks, gadgets and even an after party with live music and a table football tournament. Big thanks for that!

Of course the venue and catering are only an addition to the sessions that define a conference. Here are my opinions about them:

Chris presented his point of view on HTML5 and its benefits over native mobile apps. It was also a very brief introduction to Microsoft’s new HTML browser – Spartan. It didn’t contain much new information, but overall I liked this talk.

  • “Little changes to make your app a lot faster” – Matt Ellis

This presentation focused on micro-optimizations of C# code. Matt showed code samples and ways to improve their performance (especially memory and garbage collector usage). You probably won’t use this knowledge very often, but it’s good to know how the C# compiler works and how you can improve your critical code paths. This talk was good, but nothing beyond that.

An on-stage coding session showing the power of F# type inference and how the type system can guide you to better solutions. Right now I’m interested in functional languages (although more in dynamic ones like Clojure), so this was an ideal session for me. Be warned, though, that it requires your full attention to follow Mark’s thinking process.

This was my second time (the previous one at DevDay 2014) seeing Dan live on stage, and again I think he’s a great speaker. In this light and entertaining talk he pointed out directions a good programmer should focus on during their career. For me it was the best speech of the day.

A quite complete overview of the new version of the ASP.NET ecosystem (MVC, Web API). If you don’t know much about the next version of Microsoft’s web framework, then this talk is for you. For me it wasn’t anything new, but even so I think Maurice did a good job.

I’m not a big fan of the AngularJS framework (sounds like a good topic for a blog post), but this talk wasn’t really about Angular. It was about data binding in this framework and how it works – which is a very interesting thing! Chris, even with technical problems, did a great live-coding session. He started with an empty text file and finished with a working data binding solution in JavaScript. I like to understand how “the magic” works, so such presentations are for me.

  • Discussion panel

Unfortunately, the last speaker – Mark Rendle – couldn’t come to Wroclaw (because of the flu). Instead of his talk there was a discussion panel with all the speakers, moderated by Dan North. Dan again showed his talent by selecting interesting topics and “forcing” the others to answer. It was a great finishing session and I suggest you watch it when it’s available online.

I’m glad that I could attend this event. It was a well spent day with people who care about what they do. Such events make me want to be part of this awesome community! It would be great to be there next year. Thanks, guys, for organizing it!

The only sad news that day was that Terry Pratchett died. Rest in peace Sir Terry!

ClojureScript: JavaScript Interop

(this post was updated on 15th of March 2015)

As I mentioned before on this blog, I’m in the ongoing process of learning Clojure (and ClojureScript). To better understand the language, I’ve written a small web application. For fun I decided that all my front end code would be written in ClojureScript. Because I needed to use an external JavaScript API (Bing Maps AJAX Control), I wrote quite a bit of JavaScript interop code – the syntax wasn’t obvious to me and I couldn’t find one place that had all that info, so I wrote this post. Be warned, this is quite a long post!

JavaScript example

To make all the examples easier to understand, let’s define some simple JavaScript code:

//global variable
globalName = "JavaScript Interop";
globalArray = [1, 2, false, ["a", "b", "c"]];
globalObject = {
  a: 1,
  b: 2,
  c: [10, 11, 12],
  d: "some text"
};

//global function
window.hello = function() {
  alert("hello!");
}

//global function
window.helloAgain = function(name) {
  alert(name);
}

//a JS type
MyType = function() {
  this.name = "MyType";
}

MyComplexType = function(name) {
  this.name = name;
}

MyComplexType.prototype.hello = function() {
  alert(this.name);
}

MyComplexType.prototype.helloFrom = function(userName) {
  alert("Hello from " + userName);
}

Global scope

ClojureScript defines a special js namespace to allow access to JavaScript types/functions/methods/objects defined in the global scope (i.e. the window object in the browser).

(def text js/globalName) ;; JS output: namespace.text = globalName;

Creating objects

We can create JavaScript objects from ClojureScript by adding a . (dot) to the end of the constructor function name:

(def t1 (js/MyType.)) ;; JS output: namespace.t1 = new MyType;

(note: at first I thought this generated JS code was wrong because of the lack of parentheses, but it turns out it’s valid – if a constructor function doesn’t take arguments, the parentheses can be skipped)

with arguments:

(def t2 (js/MyComplexType. "Bob")) ;; JS output: namespace.t2 = new MyComplexType("Bob");

There is also a different way of creating objects: using the new special form (here the name of the JS constructor function is given without the period):

(def my-type (new js/MyComplexType "Bob")) ;; JS output: namespace.my_type = new MyComplexType("Bob");

Invoking methods

To invoke a JavaScript method we need to prefix the name of the method with a . (dot):

(.hello js/window) ;; JS output: window.hello();

which is syntactic sugar for:

(. js/window (hello))

To pass arguments to the function we write:

(.helloAgain js/window "John") ;; JS output: window.helloAgain("John");

or

(. js/window (helloAgain "John"))

The same thing can be done with a created object:

(def my-type (js/MyComplexType. "Bob")) ;; JS output: namespace.my_type = new MyComplexType("Bob");
(.hello my-type)                        ;; JS output: namespace.my_type.hello();

Accessing properties

ClojureScript provides a few ways of working with JavaScript properties. The simplest one is to use the .- property access syntax:

(def my-type (js/MyType.))  ;; JS output: namespace.my_type = new MyType;
(def name (.-name my-type)) ;; JS output: namespace.name = namespace.my_type.name;

A similar thing can be achieved with the aget function, which takes the object and the name of the property (as a string) as arguments:

(def name (aget my-type "name")) ;; JS output: namespace.name = namespace.my_type["name"];

aget also allows accessing nested properties:

(aget js/object "prop1" "prop2" "prop3") ;; JS output: object["prop1"]["prop2"]["prop3"];

the same thing (though the generated code is different) can be done using the .. syntax:

(.. js/object -prop1 -prop2 -prop3) ;; JS output: object.prop1.prop2.prop3;

You can also set the value of a property from ClojureScript; to do this you can use the aset function or the set! special form.

The aset function takes the name of the property as a string:

(def my-type (js/MyType.))
(aset my-type "name" "Bob") ;; JS output: namespace.my_type["name"] = "Bob";

and the set! takes a property access:

(set! (.-name my-type) "Andy") ;; JS output: namespace.my_type.name = "Andy";

Arrays

The aget function can also be used for accessing a JavaScript array element:

(aget js/globalArray 1) ;; JS output: globalArray[1];

or, to get a nested element, you can use it this way:

(aget js/globalArray 3 1) ;; JS output: globalArray[3][1];

Nested scopes

This subject was a bit confusing for me. In my project I wanted to translate code like this:

var map = new Microsoft.Maps.Map();

to ClojureScript. As you can see, the Map constructor lives in a nested scope. The idiomatic way of accessing nested properties is to use .. or aget, but this can’t be done for a constructor function. In such a case we need to use dot notation (even though it’s not idiomatic Clojure code):

(def m2 (js/Microsoft.Maps.Themes.BingTheme.))

or with the new special form:

(def m1 (new js/Microsoft.Maps.Themes.BingTheme))

If we write this expression like this:

(def m3 (new (.. js/Microsoft -Maps -Themes -BingTheme)))

we will get an exception:

 First arg to new must be a symbol at line
                core.clj:4403 clojure.core/ex-info
             analyzer.clj:268 cljs.analyzer/error
             analyzer.clj:265 cljs.analyzer/error
             analyzer.clj:908 cljs.analyzer/eval1316[fn]
             MultiFn.java:241 clojure.lang.MultiFn.invoke
            analyzer.clj:1444 cljs.analyzer/analyze-seq
            analyzer.clj:1532 cljs.analyzer/analyze[fn]
            analyzer.clj:1525 cljs.analyzer/analyze
             analyzer.clj:609 cljs.analyzer/eval1188[fn]
             analyzer.clj:608 cljs.analyzer/eval1188[fn]
             MultiFn.java:241 clojure.lang.MultiFn.invoke
            analyzer.clj:1444 cljs.analyzer/analyze-seq
            analyzer.clj:1532 cljs.analyzer/analyze[fn]
            analyzer.clj:1525 cljs.analyzer/analyze
            analyzer.clj:1520 cljs.analyzer/analyze
             compiler.clj:908 cljs.compiler/compile-file*
            compiler.clj:1022 cljs.compiler/compile-file

Creating JavaScript objects

There are many cases where we need to pass a JavaScript object to a method from ClojureScript. In general, ClojureScript works with its own data structures (immutable, persistent vectors, maps, sets etc.) that can be converted to plain JS objects. There are several ways of doing this.

If we want to create a simple JavaScript object from a list of key-value pairs we can use the js-obj macro:

(def my-object (js-obj "a" 1 "b" true "c" nil)) ;; JS output: namespace.my_object_4 = (function (){var obj6284 = {"a":(1),"b":true,"c":null};return obj6284;})();

Note that js-obj expects strings as keys and basic data literals (string, number, boolean) as values. ClojureScript data structures won’t be converted, so this:

(def js-object (js-obj  :a 1 :b [1 2 3] :c #{"d" true :e nil}))

will create such JavaScript object:

{
  ":c" cljs.core.PersistentHashSet,
  ":b" cljs.core.PersistentVector,
  ":a" 1
}

as you can see, internal types such as cljs.core.PersistentHashSet and cljs.core.PersistentVector are used, and the ClojureScript keywords were changed to strings prefixed with a colon.

To solve this problem we can use clj->js function that: “Recursively transforms ClojureScript values to JavaScript. sets/vectors/lists become Arrays, Keywords and Symbol become Strings, Maps become Objects.”

(def js-object (clj->js {:a 1 :b [1 2 3] :c #{"d" true :e nil}}))

will produce such object:

{
  "a": 1,
  "b": [1, 2, 3],
  "c": [null, "d", "e", true]
}

There is also one more way of producing JavaScript objects – we can use the #js reader literal:

(def js-object #js {:a 1 :b 2})

which generates code:

namespace.core.js_object = {"b": (2), "a": (1)};

When working with #js you need to be cautious, because this literal also won’t transform inner structures (it’s shallow):

(def js-object #js {:a 1 :b [1 2 3] :c {"d" true :e nil}})

will create such object:

{
  "c": cljs.core.PersistentArrayMap,
  "b": cljs.core.PersistentVector,
  "a": 1
}

to solve this, you need to add #js before every nested ClojureScript structure:

(def js-object #js {:a 1 :b #js [1 2 3] :c #js {"d" true :e nil}})

which creates this JavaScript object:
{
  "c": {
    "e": null,
    "d": true
  },
  "b": [1, 2, 3 ],
  "a": 1
}

Using JavaScript objects

There are situations when we need to convert a JavaScript object or array into a ClojureScript data structure. We can do this using the js->clj function, which: “Recursively transforms JavaScript arrays into ClojureScript vectors, and JavaScript objects into ClojureScript maps. With option ‘:keywordize-keys true’ will convert object fields from strings to keywords.”

(def my-array (js->clj (.-globalArray js/window)))
(def first-item (get my-array 0)) ;; 1

(def my-obj (js->clj (.-globalObject js/window)))
(def a (get my-obj "a")) ;; 1

as the function’s docstring states, we can pass :keywordize-keys true to convert the string keys of the created map to keywords:

(def my-obj-2 (js->clj (.-globalObject js/window) :keywordize-keys true))
(def a-2 (:a my-obj-2)) ;; 1

Addition

If all other methods of working with JavaScript fail, there is js*, which takes a string as an argument and emits it as JavaScript code:

(js* "alert('my special JS code')") ;; JS output: alert('my special JS code');

Exposing ClojureScript functions

It is worth noting that the exact form of the JavaScript code generated from ClojureScript depends on compiler settings. Those settings can be defined in the Leiningen project.clj file:

Part of project.clj file:
:cljsbuild {
    :builds [{:id "dev"
              :source-paths ["src"]
              :compiler {
                :main your-namespace.core
                :output-to "out/your-namespace.js"
                :output-dir "out"
                :optimizations :none
                :cache-analysis true
                :source-map true}}
             {:id "release"
              :source-paths ["src"]
              :compiler {
                :main blog-sc-testing.core
                :output-to "out-adv/your-namespace.min.js"
                :output-dir "out-adv"
                :optimizations :advanced
                :pretty-print false}}]}

As you can see above, there are two builds defined: dev and release. Please note the :optimizations parameter – with the :advanced value the code will be truncated (unused code is removed) and renamed (shorter names are used).

For example, this ClojureScript code:

(defn add-numbers [a b]
  (+ a b))

will be compiled to such JavaScript code in :advanced mode:

function yg(a,b){return a+b}

The function name is completely “random”, so you can’t use it from a JavaScript file. To be able to use functions defined in ClojureScript (with their original names) you should mark them with the :export metadata:

(defn ^:export add-numbers [a b]
  (+ a b))

The :export keyword tells the compiler to export the given function name to the outside world. (This is done by the exportSymbol function from the Google Closure Compiler – but I won’t go into the details.) Then in your external JavaScript code you can invoke this function:

your_namespace.core.add_numbers(1,2);

Please notice that all dashes were replaced by underscores.
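As a tiny illustration, the dash-to-underscore renaming can be sketched like this (a simplified sketch only – the compiler’s real munging handles many more special characters):

```javascript
// Simplified sketch of the renaming the ClojureScript compiler applies
// when emitting JavaScript identifiers: dashes become underscores.
function munge(name) {
  return name.replace(/-/g, '_');
}

console.log(munge('add-numbers')); // prints "add_numbers"
```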

Using external JavaScript libraries

The :advanced mode also affects invocation of external libraries, because all function/method names are changed to a minimal form. Let’s take ClojureScript code that invokes the PolarArea method of a Chart object:

(defn ^:export create-chart []
  (let [ch (js/Chart.)]
    (. ch (PolarArea []))))

after compilation this code will look similar to this:

function(){return(new Chart).Bc(zc)}

As you can see, the PolarArea method was renamed to Bc, which of course will cause a runtime error. To prevent this, we need to tell the compiler which names shouldn’t be changed. Those names should be defined in an external JavaScript file (e.g. externs.js) and provided to the compiler. For our example, the externs.js file should look like this:

var Chart = {};
Chart.PolarArea = function() {};

The compiler is informed about this file by the :externs setting in the project.clj file:

{:id "release"
 :source-paths ["src"]
 :compiler {:main blog-sc-testing.core
            :output-to "out-adv/your-namespace.min.js"
            :output-dir "out-adv"
            :optimizations :advanced
            :externs ["externs.js"]
            :pretty-print false}}

If we do all those things, the generated JavaScript code will contain a correct invocation of the PolarArea method:

function(){return(new Chart).PolarArea(Ec)}

To learn more about using external JavaScript libraries in ClojureScript, I recommend reading the excellent blog post by Luke VanderHart on this topic.

As usual, I’d appreciate any comments.

Defining Node.js Task for Heroku Scheduler

For my pet project I needed to write a simple application which checks if there is any data in a specific table in my database and, if there is, sends me an email. After a few minutes of research I decided to use the Heroku service for it. This was my first meeting with Heroku and I was curious how easy it would be to write an app.

Heroku supports Ruby, Node.js, Python and Java. From this list I feel quite comfortable with Node.js, so I’ve chosen it.

In this blog post I’ll guide you through creating a simple Node.js app that can be used as a Heroku Scheduler task.

Before starting, I suggest reading the introductory articles in the Heroku documentation.

You also need to install the Heroku Toolbelt and Node.js (with the npm package manager) on your system.

Getting Started

Let’s create a project directory:

mkdir notyfication-sender

with an empty git repository in it:

cd notyfication-sender
git init

Now we need to tell Heroku that we’ve created a Node.js app – this is done by creating a package.json file. This file describes our application and defines all its dependencies. To create it, let’s invoke the command:

npm init

and answer the questions. As a result, the package.json file is generated.

This is a good time to do our first commit:

git add .
git commit -m "init"

First deploy

Now we are ready to create the Heroku application. First we need to log in to the service:

heroku login

then we can create the app:

heroku create notyfication-sender

If you want to use a European server, you should add the --region eu parameter to the create command.

If everything is set up, let’s do a deploy by pushing all our code from the git repository to the Heroku server:

git push heroku master

That’s it! Our first app is ready to go – except that it’s still empty, there is no code ;)

Installing add-ons

Our application will use three add-ons:

  • Heroku Postgres – for storing and retrieving data
  • Heroku Scheduler – for running a job every hour
  • SendGrid – for sending emails

We need to add them to Heroku. This can be done by invoking:

heroku addons:add heroku-postgresql:dev
heroku addons:add scheduler
heroku addons:add sendgrid

One important note: to install add-ons you need to verify your Heroku account by providing valid credit card information.

Node.js dependencies

To be able to use PostgreSQL and SendGrid in our JavaScript code, we need to install the npm packages for them:

npm install pg --save
npm install sendgrid --save

the --save argument adds those packages as dependencies to the package.json file – this helps with installing/updating them in the future.
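After both installs, the dependencies section of package.json should look roughly like this (the version numbers below are only illustrative; yours will differ):

```json
{
  "name": "notyfication-sender",
  "dependencies": {
    "pg": "^4.0.0",
    "sendgrid": "^1.0.0"
  }
}
```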

Scheduler

You can find the documentation for the scheduler here, but it lacks details about Node.js and it can take some time to figure everything out on your own.

First, a task should be placed in the bin folder in the root project directory:

mkdir bin

Second, the task should be written in a file without any extension (in our case, the checkItems file):

cd bin
touch checkItems

The last important thing is that the first line of the script file must contain a shebang that defines which interpreter is used to run the script (here: node):

#!/usr/bin/env node

Finally we are ready to write real code!

Coding

Let’s open the checkItems file in our favorite editor. So far the file should contain only the shebang line.

First we require the PostgreSQL (pg) and SendGrid modules:

#!/usr/bin/env node

var pg = require('pg');
var sendgrid  = require('sendgrid')(
  process.env.SENDGRID_USERNAME,
  process.env.SENDGRID_PASSWORD
);

process.env.SENDGRID_USERNAME and process.env.SENDGRID_PASSWORD contain your SendGrid account information. These environment variables are set by Heroku itself.
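If you want to fail fast when these variables are missing (e.g. when running the script locally), a small helper can make that explicit. Note that requireEnv is my own hypothetical name, not part of any module:

```javascript
// Hypothetical helper: read a required environment variable and fail
// fast with a clear message when it has not been set.
function requireEnv(name) {
  var value = process.env[name];
  if (!value) {
    throw new Error('Missing required environment variable: ' + name);
  }
  return value;
}

// Simulated value, for illustration only:
process.env.SENDGRID_USERNAME = 'demo-user';
console.log(requireEnv('SENDGRID_USERNAME')); // prints "demo-user"
```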

To connect to the Postgres database we invoke:

pg.connect(process.env.DATABASE_URL, function(err, client, done) {
});

An important note: to be able to use the DATABASE_URL variable you need to promote your database. First we need to establish the exact URL of our database; to do this, execute this command in the root folder:

heroku pg:info

It lists all available databases for your program, with their URL variables. The output should be similar to this:

=== HEROKU_POSTGRESQL_RED_URL
Plan:        Dev
Status:      available
Connections: 0
PG Version:  9.3.2
Created:     2014-02-06 18:37 UTC
Data Size:   6.4 MB
Tables:      0
Rows:        0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback:    Unsupported

Now we can execute the command:

heroku pg:promote HEROKU_POSTGRESQL_RED_URL

which sets the DATABASE_URL variable to the value of HEROKU_POSTGRESQL_RED_URL.

I won’t describe how to create tables and import data into them; you can read about this here.

Let’s return to the pg module. There is one important thing to remember: when you finish your work with the database, you have to invoke the done() callback – otherwise the client will never be returned to the connection pool and you will leak clients.
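To illustrate that discipline with a standalone sketch (the fake client below only imitates pg’s client.query(sql, callback) shape, and queryAndRelease is a made-up helper name, not part of the pg API):

```javascript
// Sketch of the done() discipline: wrap the query so the client is
// always returned to the pool once the query callback fires.
function queryAndRelease(client, done, sql, callback) {
  client.query(sql, function (err, result) {
    done(); // always return the client to the pool
    callback(err, result);
  });
}

// Fake client and done, for illustration only:
var released = false;
var fakeClient = {
  query: function (sql, cb) { cb(null, { rows: [1, 2] }); }
};

queryAndRelease(fakeClient, function () { released = true; }, 'SELECT 1',
  function (err, result) {
    console.log(released, result.rows.length); // prints "true 2"
  });
```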

Before writing a query, let’s write a function for error handling, based on the code from the pg documentation:

pg.connect(process.env.DATABASE_URL, function(err, client, done) {
  var handleError = function(err) {
    if(!err) return false;
    done(client);
    console.error(err);
    return true;
  };
});

To query a table (I assume there is a todos table) in the database we can write this code:

pg.connect(process.env.DATABASE_URL, function(err, client, done) {
  var handleError = function(err) {
    if(!err) return false;
    done(client);
    console.error(err);
    return true;
  };

  client.query('SELECT * FROM todos', function(err, result) {
    if (handleError(err)) return;

    if (result.rows.length > 0) {
      //send email

      done();
      pg.end();
    }
  });
});

The idea here is to send a notification email only if there are any rows in the todos table. Please pay attention to the fact that we invoke the done() callback when the query is done. I also invoke pg.end(); to immediately close all connections to the PostgreSQL server – I do this to save dyno hours and close the app as fast as possible.

The last part is the code that sends the email with the SendGrid module:

sendgrid.send({
    to: 'my@email.com',
    from: 'app@email.com',
    subject: 'There are some items to do',
    text: 'You have items to do'
  }, function(err, json) {
    if (err) {
      console.error(err);
    }
});

so the whole code looks like this:

#!/usr/bin/env node

var pg = require('pg');
var sendgrid  = require('sendgrid')(
  process.env.SENDGRID_USERNAME,
  process.env.SENDGRID_PASSWORD
);

pg.connect(process.env.DATABASE_URL, function(err, client, done) {
  var handleError = function(err) {
    if(!err) return false;
    done(client);
    console.error(err);
    return true;
  };

  client.query('SELECT * FROM todos', function(err, result) {
    if (handleError(err)) return;

    if (result.rows.length > 0) {
      sendgrid.send({
          to: 'my@email.com',
          from: 'app@email.com',
          subject: 'There are some items to do',
          text: 'You have items to do'
        }, function(err, json) {
          if (err) {
            console.error(err);
          }

          done();
          pg.end();
      });
    }
  });
});

Please notice that I’ve moved the done(); pg.end(); calls into the callback of the send method.

To run and test the code we should deploy it to the server and run it:

git add .
git commit -m "Code for scheduler task"
git push heroku master
heroku run checkItems

If everything is OK, the code should run without any errors.

This is very simple code that only illustrates the approach. For production use it should be extended and tested more thoroughly.

Now that we have the code of our scheduler task, we can set up the scheduler on the Heroku site.

Setting Heroku scheduler

To configure the scheduler we need to go to its dashboard page by invoking the command:

heroku addons:open scheduler

Heroku Scheduler page

On the page, click the Add Job... link. In the text box, write the name of the file (without any extension) that defines the task and is located in the bin folder (in our case: checkItems). From the drop-down list select the frequency and adjust the next run time. Commit your changes by clicking the save button.

Heroku Scheduler page

That’s it – you’ve defined your scheduler task. From now on it will run at the interval you defined.

Heroku Scheduler page

I hope this article helps you create your own custom task for the Heroku scheduler. Enjoy!

You can clone this code from my GitHub repository

NDepend 5

I wrote about NDepend on my blog in August, and since then the new version (5) has been released. This is a major update, with many changes and improvements. In this text I focus on the parts that are most interesting to me.

Trends

When we work with code, we introduce changes which affect the whole code base. We can increase or decrease the number of lines of code, the complexity of methods, the cohesion of methods and much more. All of this information can be calculated and shown by NDepend. These statistics, however, show only the condition of the project at the current point in time. It would be very helpful to know how those metrics change over time. What was the complexity two weeks ago? How many types did we have one month ago? If we could compare current values with historical ones, we could tell whether our changes are good for the project and whether the quality of the code is increasing. In essence, the ‘Trends’ feature gives us such possibilities.

Trend chart

(The image is from NDepend site, because I didn’t have enough data to present a nice chart)

NDepend now has the ability to store its own analysis data and compute charts based on it. On those charts we can see changes of various code metrics over time. The longer we work with NDepend (by default the trend logs are calculated once a day), the more accurate our charts are.

By default the tool comes with a set of predefined trend queries. As usual with NDepend, you can write your own queries (using CQLinq) and use them.

Trend queries

Trends have become my favorite new feature of NDepend. I can say that I’m a bit addicted to them. At least once a day I like to spend a few minutes analyzing them. This gives me a good overview of the whole project.

Dashboard

It is always nice to have one central place where you can look and see the most important things. NDepend 5 introduces such a place in the form of the ‘Dashboard’ view.

The dashboard contains predefined sections with basic statistics and code rules. In addition, we can change the view by adding various trend charts to it. It is worth mentioning that every chart view can be fully customized.

The dashboard is a nice starting point for using NDepend. For new users it can also lower the learning curve, suggesting which features are the most important and should be checked first.

Dashboard

New look and feel

The user interface has been completely redesigned in the spirit of ‘flat’ design principles. It uses pastel colors on white backgrounds and clear, easy to read fonts. The GUI is now coherent with the Visual Studio 2012/2013 style. When you work with NDepend 5 you feel that it’s part of Visual Studio and not a separate add-in.

Still, there is a little room for improvement in the ‘Metrics’ and ‘Matrix’ views, which, in my opinion, stand out from the rest of the design (maybe it’s because of the colors and the textures?).

Overall I’m pleased with those changes.

NDepend report

Installing Clojure on Nitrous.IO Platform

I’ve been evaluating the Nitrous.IO service for a few weeks now. In a nutshell, it allows you to create a virtual development environment to which you can connect remotely (using the terminal, but also the web page, a Chrome app or even a Mac app). By default Nitrous.IO comes with preconfigured boxes for Ruby/Rails, Node.js, Python/Django and Go.

I started using Nitrous because I wanted easy and fast access to programming languages without installing them on my Windows machine.

Lately I’ve been trying to get my head around the Clojure language (I learned functional programming during my studies, but that was a long time ago, so I have to discover it again). Because of that, I wanted to install Clojure on the Nitrous.IO platform. It isn’t hard, but it requires a few steps. Below I will show you how to do it (this tutorial is inspired by the ‘Installing Erlang’ guide).

Let’s start:

  • Create a new development box (I’ve used the one with the Go language, but it doesn’t matter) or use an existing one.

  • Connect to the box – you can even use the web page – and start a terminal.

  • The easiest way to install Clojure on any system is to use the Leiningen tool. First, let’s make a folder where we will store the installation script:

mkdir ~/.tools

then we download the script:

cd ~/.tools
wget https://raw.github.com/technomancy/leiningen/stable/bin/lein

  • Once the script is downloaded, we need to modify the ~/.bashrc file to add the .tools directory to our $PATH – we can do it with vim:

vim ~/.bashrc

add this line to the end of the file:

PATH=$PATH:$HOME/.tools/

save and exit.

  • Reload the ~/.bashrc file:

source ~/.bashrc

and check whether $PATH contains the .tools directory:

echo $PATH

the output should be similar to this one:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/action/.gem/ruby/1.9.1/bin:/home/action/.go/bin:/home/action/workspace/bin:/home/action/.google_appengine:/home/action/.go/bin:/home/action/workspace/bin:/home/action/.google_appengine:/home/action/.tools/
  • Now we need to make the Leiningen script executable:

chmod a+x ~/.tools/lein

and we are ready to go.

  • To start the Clojure REPL (read-eval-print loop), type:

lein repl

The first time you run it, Leiningen will download and install Clojure; after it finishes, you can start playing with the new language.

If you need more information about Leiningen, you can read the documentation.
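The whole installation can be condensed into one short shell sketch. This is my own consolidation of the commands from the steps above (assuming a fresh Nitrous.IO box and the same paths used in this post):

```shell
# Consolidated install sketch (same commands as the steps above).
mkdir -p ~/.tools
cd ~/.tools
# Download the Leiningen bootstrap script.
wget https://raw.github.com/technomancy/leiningen/stable/bin/lein
chmod a+x ~/.tools/lein
# Put the .tools directory on PATH for future shells, then reload.
echo 'PATH=$PATH:$HOME/.tools/' >> ~/.bashrc
source ~/.bashrc
# The first run downloads Clojure itself and drops you into a REPL.
lein repl
```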

A small tip: if you work in the web page terminal, you can copy and paste using these shortcuts:

Windows Users:
Copy: Shift + Ctrl + c
Paste: Shift + Ctrl + v

Mac Users:
Copy: Command + c
Paste: Command + v

DevDay 2013

Last Friday (20th of September 2013) I was at the DevDay conference in Krakow, Poland. This was a free event, but – in my humble opinion – it could easily compete with the paid ones and would probably win.

First of all, the conference organizers invited well-known, good speakers. Second, the service was superb (lots of good food and drinks) and the crew was eager to help (cheers for the debugging team!). And finally, the participants were enthusiastic.

The conference started at 8 am and lasted the whole day. Except for the first and the last talk, two sessions ran simultaneously.

I attended these:

  • “Back to basics: the mess we’ve made of our fundamental data types” by Jon Skeet

A funny presentation about how even simple things in programming (like float, string, date and time types) can turn complex when you use them without understanding them. An additional benefit: you could see Jon speak with “Tony the pony” ;)

  • “Implementing Continuous Delivery” by Patrick Kua

An introduction to the continuous delivery subject – what it is, how to use it and what its benefits are. I knew only the basics of CD, so it was interesting for me.

One of the creators of the Nancy framework talked about how simple, clever design “tricks” can encourage developers to use the API you create. This talk gave me a few ideas and I will definitely use them in my code.

  • “Building Startups and Minimum Viable Products” by Ben Hall

If you’re thinking about a startup, this talk is for you. To simplify its essence: you should focus on creating a product fast and evaluating it in the real world – waiting a year (or even more) with the release isn’t a good idea. I listened to this talk with enjoyment.

  • “Full-text search with Lucene and neat things you can do with it” by Itamar Syn-Hershko

I use Lucene at work and I also learned about searching during my studies, so the first part of the talk was not that interesting for me, but the second part, about Elasticsearch, made me very happy that I attended this session. Itamar is one of the Apache Lucene contributors and a good speaker, so I recommend watching this talk.

An overview of the architecture of a site that every developer probably knows (and if not, then they should!). You could learn some interesting facts about the team and the way they build this great website.

  • “The software journeyman’s guide to being homeless and jobless.” by Rob Ashton

An inspiring, funny and probably too short talk about Rob’s one-year journey, during which he met people, coded and drank ;) I have to say that Rob has lots of charisma and is a very good speaker, so you listen to him with real pleasure. Be aware: if you’re Belgian, this talk probably isn’t for you ;)


I’m very happy that I was invited to this conference and I hope to be there next year. Once more, thank you guys for creating such a great event!

Dev Day 2013 Jon Skeet Patrick Kua Rob Ashton

NDepend

As I wrote previously, I’m a big fan of tools and programs that help me in my day-to-day work. I also like to test new programs, which is probably why Patrick Smacchia contacted me with a proposal to evaluate his program: NDepend.

Put simply, NDepend is a tool that helps improve .NET code quality by measuring and presenting code metrics. It supports the C# and Visual Basic languages and integrates nicely with Visual Studio (from 2008 to 2012), making it easy to use.

Functionality

The list of features that NDepend provides is quite long. From my perspective the most interesting parts are:

  • Metrics
  • Queries and Rules Explorer/Editor
  • Dependency Matrix/Graph

In addition there are modules for:

  • Code comparison
  • Test coverage
  • Searching

In this blog post I will focus only on the main functionality.

Metrics

In everyday life we describe objects by their properties (mass, dimensions, speed and so on). Using those properties we can compare objects and decide which of them is better (for example: which car is faster). In science and engineering we use predefined systems of measurement to define an item’s attributes, where every attribute is expressed in its own unit of measure (like the kilogram or the meter…).

The same approach exists in computer science – over the last 30 years, or even more, researchers have introduced many metrics that describe software and code. Those metrics can tell us:

  • how hard it is to introduce changes
    • nesting depth
    • afferent coupling
    • efferent coupling
  • how large and complex the code is
    • number of lines of code
    • cyclomatic complexity
  • how well the code is documented
    • lines of comment

…to name only a few from a long list.

NDepend can calculate and display 82 of the most popular code metrics.

Because of the hierarchical nature of code elements (fields and methods live in a type, a type lives in a namespace, which lives in an assembly), the metrics are shown using the treemapping technique. Depending on the selected scope (level), each code element is represented by a rectangle whose size is determined by the chosen code metric. A rectangle can contain other, smaller rectangles (e.g. a namespace contains methods). This makes it easy to spot problems and patterns. For example, if we select ‘method’ as our scope and ‘lines of code’ as the metric, big rectangles will indicate methods with many lines of code (which usually isn’t good – those methods should be split into smaller ones). The metrics view is one of the most unique and helpful parts of NDepend.

Treemap

I should mention that some (not many) metrics can’t be calculated and shown for code written in Visual Basic, so if you use VB please be warned.

Queries and Rules Explorer

In essence, NDepend provides a mechanism for querying your code base for various code metrics and the problems that occur in it. To achieve this, it defines the CQLinq (Code Query over LINQ) language, which should feel familiar to every .NET developer. For example, to find all methods that have more than 30 lines of code we could write this query:

from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
select m

By default NDepend comes with a large number of predefined queries, covering topics such as:

  • Code quality
  • Object oriented design
  • Architecture
  • Dead code
  • Naming conventions

and more…

Those queries help spot the places in the code that should be improved: methods to split into smaller ones, types to rename, complexity to reduce and so on. Working with NDepend creates a workflow where you look for a problem, fix it and then check whether the metrics have improved. After some time your code should be cleaner and easier to maintain.

The nice part about those queries is that you can modify them and even write your own. This gives you full flexibility in adjusting the tool to your needs.

Query editor

Dependency Graph

As an application grows bigger and bigger, it becomes harder to see its big picture; we lose track of the dependencies in it. The easiest way to see them is to draw a graph that reveals all the dependencies, and NDepend can create such a graph for us. The graph can have many levels, showing dependencies between assemblies, namespaces, types or even members (methods and fields).

Dependency Graph

When we work with NDepend in Visual Studio, we can use it to get various information about the code elements (types, fields, methods, namespaces or assemblies) in the solution. We can query for direct and indirect callers/callees, inheritors, implementers, type usage and so on. The results of all such queries can be shown as a graph, which is a nice and helpful addition.

There is only one problem with the graph representation: it becomes unreadable when there are too many objects in it. I had this problem when I used NDepend with a large and quite legacy project – the graph was so big that I wasn’t able to read it. I even asked Patrick what I should do to be able to work with the graph in this project. He pointed out that I should use the dependency matrix instead.

Dependency Matrix

As you probably know, a graph can be represented as an adjacency matrix. NDepend uses this idea to present a solution’s dependency graph as a matrix. The main benefit of this representation is that it is more compact and clearer to read. This is especially important when you have many assemblies in your solution – in that case the graph is too big to be read easily.
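As a toy illustration (my own example, not taken from NDepend): for three assemblies where A uses B and C, and B uses C, the dependency graph becomes the matrix below, with a 1 in row X and column Y meaning “X depends on Y”:

```
      A   B   C
  A   .   1   1
  B   .   .   1
  C   .   .   .
```

A row full of 1s marks an assembly that depends on many others, while a column full of 1s marks an assembly that many others depend on – patterns that are much harder to see in a tangled graph.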

Dependency Matrix

At first I didn’t feel comfortable with the dependency matrix and it took me a while to learn to read it correctly. The context-sensitive help speeds up the learning process, and after a few days the matrix felt completely natural to me.

There is also one big advantage of the dependency matrix over the graph: the matrix allows you to spot structural patterns, which are nicely described in the documentation – which I highly recommend reading.

Visual Studio integration

NDepend comes as a standalone application called Visual NDepend; in addition, you can integrate it with Visual Studio. After installing the plug-in, you get a new menu in the main bar, a new context menu in Solution Explorer and a few other items here and there. From the menu you have access to all of NDepend’s features; you can start:

  • Class Browser
  • Search
  • Queries and Rule Explorer/Editor
  • Dependency Matrix and Graph
  • Metrics
  • Analysis
  • Compare
  • Coverage by Tests

and few others.

To use NDepend in Visual Studio, we need to create a new NDepend project file and add to it all the assemblies we want to analyze. The project file can be added to the solution, but this isn’t required. Once the analysis is finished, NDepend is ready to go. It is worth mentioning that an HTML report summarizing all the information about your code is generated after the analysis ends.

The integration is nice and feels well organized. It is definitely a strong point of this tool.

Summary

NDepend is a very interesting product. It offers a wide range of features focused on improving the quality of the code you and your team write.

There is also another very useful application of NDepend that I would like to mention. Sometimes at work I get source code from a completely unknown project and I need to estimate how much work it is to introduce changes to it. In such cases I start NDepend, run the queries and look at the metrics – if they aren’t bad, I can assume that changing this code won’t be too hard. Of course this isn’t always true, but when you need to do a time estimate, everything helps.

Note: I’ve used source code from the Nancy project in the presented screenshots.

Making Mechanical Keyboard Less Loud

The thing I complained about in my review of the Tesoro Durandal G1N keyboard was the noise. It was especially disturbing late at night, when I was still working on my PC and my wife wanted to sleep ;). To solve this problem, I decided to reduce the noise of my keyboard by applying sound dampeners – special soft rubber O-rings installed on every keycap stem.

O-rings

After some searching I found that good-quality O-rings can be bought from the WASD Keyboards company. It is an American store, but the shipping cost, even for European customers, is quite low – sending by ‘USPS First-Class International’ (a normal envelope) costs around $7 and takes about 10 days.

As of today the shop offers two types of O-rings:
40A-L (0.2mm Reduction) Red
40A-R (0.4mm Reduction) Blue

40A-L (0.2mm Reduction) Red

I bought the 40A-L red ones because they reduce the noise of the keyboard with only minimal impact on the typing experience. I’ve now been using them for more than a month and I’m really happy with the effect. When I hit a key, the sound isn’t as “plastic” as before and it’s much more dampened. I can still easily feel the moment when the key switch is actuated.

Unfortunately I wasn’t able to test the blue ones (40A-R – 0.4mm reduction), so I can’t write about the differences between them.

Installation

O-ring installation is a bit painful, because you have to remove every keycap, put the O-ring onto the stem and then put the keycap back onto the keyboard. To see how to do it properly, I recommend watching this short video created by the WASD company. It took me about one hour to install them on all the stems. To pull out a keycap I used two thin wires, which I put on opposite sides of the keycap and then pulled up. After 10 keys you can do it without thinking ;)

Final words

If you like your mechanical keyboard but think it’s too loud, you should try these O-rings – you will probably be happy with the result; at least I am.

40A-L (0.2mm Reduction) Red

Tesoro Durandal G1N Mechanical Keyboard Review

A few weeks ago I decided to change my keyboard. For the last two or three years I had been using the Microsoft Wired Keyboard 600, which is a nice, low-profile keyboard. During my research for the new keyboard I found many articles about mechanical keyboards and how good they are in comparison to “traditional” rubber-dome ones. This convinced me to buy one. Still, there was the question of the exact model, so again I spent some time reading materials about key switches and their purposes (I can recommend this thread – it contains lots of useful information). The final decision was to buy a keyboard with Cherry MX Brown switches.

Unfortunately, in Europe and especially in Poland it is hard to get a mechanical keyboard, not to mention choosing the type of key switches. You can buy some keyboards with Cherry MX Red switches, because those are used in “gaming keyboards” and are therefore more popular. Of course I could import a keyboard from the USA, but then the price would be too high for me – the shipping cost is considerable, plus I would need to pay duty and value-added tax. All of this is why I bought the Tesoro (in the United States it’s Max Keyboard) Durandal G1N mechanical keyboard, which had just appeared on the European and Polish market.

Quality

The keyboard has a simple US international layout. There is one additional key – Fn – which enables the multimedia function keys mapped to F1 through F6 (mute, volume change, play, pause, rewind). The keyboard looks almost like a “normal” one, except for the upper-right corner, which is a bit bigger and carries a lighted Tesoro logo. You can also see the brand name on the bottom of the keyboard, just beneath the space bar. The keyboard measures at most 46 cm (18.1 in) in length and 17 cm (6.7 in) in width. The front of the case imitates brushed metal, which looks good and prevents fingerprints. On the back of the case there are rubberized elements that keep the keyboard from slipping on the table. The keyboard itself is heavy and made of good-quality plastic. It has a braided cable, which is also a nice touch. In summary, I’m pleased with the quality of my new keyboard.

Writing experience

This is my first mechanical keyboard and I have to admit that typing on it is a real pleasure. You can easily feel the moment when the key switch actuates (for the MX Brown switch it’s about halfway through the key press) and you can then release it. This lets you use less force when typing, so you can type faster and your hands get less tired. Unfortunately I haven’t used any other mechanical keyboard, so I can’t compare this one with others – maybe in the future I’ll be able to ;)

I decided to buy a keyboard with the brown switches because they are advertised as quieter than the blue ones (with blue switches you hear a ‘click’ when you press a key), but still good for typing. Despite that, I have to admit that this keyboard is quite loud. For me this isn’t a problem, but for my wife it is ;) She complains about the noise, so I have to close the door to my room when I’m typing a lot of text or playing a game. To solve this problem I’m going to buy rubber O-ring switch dampeners to reduce the noise. When I do, I’ll try to write something about it on my blog.

Conclusion

If you are looking for a good, inexpensive keyboard for writing and occasionally playing games, I can recommend the Tesoro Durandal G1N. I’ve been using it for more than a month and I’m very happy with it. My wife, apparently, is not…

Tesoro Durandal G1N Tesoro Durandal G1N Tesoro Durandal G1N Cherry MX Brown switches