Elm Native basics

Posted on Jan 23, 2017

Beware

Before diving into the topic, don’t forget that Native, in Elm, means writing JavaScript code called directly from Elm code. It has nothing to do with iOS, Android, or tools like React Native.

Be super careful. The promise that Elm has no runtime errors is broken the moment you start writing native code. If your native JavaScript code throws an error, it will crash your program. It’s up to you to ensure that cannot happen, just like Elm does. try/catch is your best friend if you have any doubt.
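To make it concrete, here is a minimal sketch of that defensive style in plain JavaScript (safeParse and its result shape are made up for this example; a real native module would convert the result into a proper Elm value, a Result for instance):

```javascript
// Guard any native call that can throw, so an exception never
// escapes into the Elm runtime. `safeParse` is a made-up example.
function safeParse(str) {
  try {
    return { ok: true, value: JSON.parse(str) };
  } catch (e) {
    return { ok: false, error: e.message };
  }
}

console.log(safeParse('{"answer":42}')); // { ok: true, value: { answer: 42 } }
console.log(safeParse('not json').ok);   // false
```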

A simple and useless example

Let’s start with simple things. Let’s pretend for a brief moment that Elm has neither a function nor an operator to add integers. That kind of sucks, because you don’t want to write ports to achieve that; it should be synchronous, right? Your only solution is to write a native JavaScript function just like this one:

function add(a, b) {
  return a + b;
}

That was easy! Now, how do we plug it into our Elm code? First, we need to wrap it inside some specific code so that it fits into the JavaScript file resulting from Elm compilation. I will talk about the syntax of this file in another blog post so, for now, just trust me on it.

var _username$project$Native_Utils = function () {
  function add(a, b) {
    return a + b;
  }

  return {
    add: F2(add)
  };
}()

What happened? Our code was so nice and now it’s nearly nonsense… Actually, it’s not that hard. Nearly all functions and constants in Elm are assigned to a JavaScript variable in the compiled file. The naming syntax is: _[username]$[project name]$[module name]. Both username and project name come from the repository field inside your elm-package.json; most of the time they will be your GitHub username and the name of your project. If you don’t have this field, the default values username and project will be used, so don’t forget to edit them to match your project.
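For example, with an elm-package.json along these lines (jdoe and my-app are placeholder names), the generated prefix would be _jdoe$my_app$ since dashes are turned into underscores. Note the native-modules flag: if I recall correctly, the compiler refuses Native code without it.

```json
{
  "version": "1.0.0",
  "summary": "demo project",
  "repository": "https://github.com/jdoe/my-app.git",
  "license": "BSD3",
  "source-directories": [ "." ],
  "exposed-modules": [],
  "native-modules": true,
  "dependencies": {
    "elm-lang/core": "5.0.0 <= v < 6.0.0"
  },
  "elm-version": "0.18.0 <= v < 0.19.0"
}
```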

Then we create an IIFE (aka a function that immediately calls itself). That’s because we want to scope all our variables and functions. The returned value is an object exposing the API of our native module. Right now, we only expose one function, but we could have way more.

Finally, because all functions in Elm are curried, we need a small Elm helper to achieve that. There are several of them, from F2 to F9, where the number matches the number of arguments of your function. Since add has 2 arguments, we wrap it inside F2. There is no need to wrap functions with zero or one argument. We could also have directly written a curried function, just like this:

function add(a) {
  return function (b) {
    return a + b;
  };
}

But the F2 helper is easier.
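For the curious, a stripped-down F2 could look like the sketch below (the real helper in the compiled runtime also attaches some metadata so calls can skip the currying overhead, so treat this as an approximation):

```javascript
// Approximate behaviour of Elm's F2 helper: turn a plain
// two-argument function into a curried one.
function F2(fun) {
  return function (a) {
    return function (b) {
      return fun(a, b);
    };
  };
}

var add = F2(function (a, b) { return a + b; });
console.log(add(40)(2)); // 42
```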

Now, let’s save our code inside the Native/Utils.js file. That’s because we named our module Native_Utils (see the first line of the previous JavaScript code), and an underscore means a subfolder. The Native part is mandatory, but you can rename the Utils part as you want; just be sure to also change the variable name at the beginning of the native code. We can now import it inside our Elm code and use it.

-- This is a normal Elm module
-- We are using the same Utils name but you don't have to
module Utils exposing (add)

-- Importing the native code
-- Beware, your code will compile even if you don't import it
-- but will crash at runtime
import Native.Utils

-- Creating an Elm function based on our native function
-- Just like before, you don't have to use the same name
-- but it's easier this way
add: Int -> Int -> Int
add =
  Native.Utils.add

This is a 200% danger zone. The Elm compiler is mostly like “Oh, you are using native? Ok, I will fully trust you on what you are doing, but don’t complain if the world ends when your program crashes.” It means the compiler will not perform any type checks on native functions. For example, we could write:

module Utils exposing (addInt, addFloat)

import Native.Utils

addInt: Int -> Int -> Int
addInt =
  Native.Utils.add

addFloat: Float -> Float -> Float
addFloat =
  Native.Utils.add

And it will work just fine. The compiler will not try to do any type inference on native code. It’s totally up to you to make sure it will actually produce the expected result at runtime.
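Plain JavaScript shows the danger: nothing stops the same add from being called with values that neither Elm signature promised, and instead of crashing it silently returns a wrong value:

```javascript
function add(a, b) {
  return a + b;
}

console.log(add(1, 2));     // 3, what the Int signature promised
console.log(add(1.5, 2.5)); // 4, fine for the Float signature too
console.log(add('1', '2')); // "12", no crash, just silent nonsense
```

The crash-free path is actually the scary one: a String sneaking into addInt would flow through your program as nonsense data.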

Protip: this is why you should always give an Elm signature to your native functions by writing a corresponding Elm function which just calls them directly. As we did above, feel free to create several Elm functions with different signatures if your native function can handle them all. But never write standard Elm code with a native call in the middle of it: its signature will be a pain to understand, it will be hard to debug, and it will be easier to break at runtime.

That’s it! You can now use your Utils module inside your Elm project.

module Main exposing (..)

import Utils

type alias Model = Int

init: Model
init = Utils.add 40 2

Let’s go crazy

Obviously, you can write code that bends the limitations of Elm. All Elm functions are pure and without any side effects. Among other things, it means that given the same arguments, a function will always return the same result. That’s why Math.random is a Task: because it isn’t pure. Using Native, you can make it synchronous. We will see the example just below, but never do that in your project; I just want to show you that it is possible.

// File: Native/HackMath.js
var _username$project$Native_HackMath = function () {
  return {
    random: Math.random
  }
}()

-- File: HackMath.elm
module HackMath exposing (random)

import Native.HackMath

random: () -> Float
random =
  Native.HackMath.random

-- File: Main.elm
module Main exposing (..)

import HackMath

type alias Model = Float

init: Model
init =
  HackMath.random() -- YOLO

Tasks

Tasks are a huge part of any Elm program, so it’s very likely that at some point you will need to create them inside Native code. This is super useful for wrapping async Node functions. We will use other Elm helpers to achieve that. The main one is _elm_lang$core$Native_Scheduler, which is responsible for creating and finishing, either with a success or a failure, any task inside Native code.

First, we call the nativeBinding method of the scheduler. This actually creates an Elm task, but we need to give it a function as its first and only argument. This function takes a callback that you should call when your task is finished; this is how you can handle asynchronous tasks. You must wrap the result inside either the succeed or fail methods of the scheduler.

Remember that Tasks are not only for asynchronous code. Any non-pure function, like Math.random or Date.now, should also be a Task.

// File: Native/Utils.js
var _username$project$Native_Utils = function () {
  var scheduler = _elm_lang$core$Native_Scheduler

  var now = scheduler.nativeBinding(function (callback) {
    callback(scheduler.succeed(new Date()))
  })

  function later(time, value) {
    return scheduler.nativeBinding(function (callback) {
      if (time < 0) {
        // We don't need to check that time is a number because
        // we will specify that it must be an Int in the Elm signature
        callback(scheduler.fail('First argument must be a positive integer'))
      } else {
        setTimeout(function () {
          callback(scheduler.succeed(value))
        }, time)
      }
    })
  }

  return {
    now: now,
    later: F2(later)
  }
}()

-- File: Utils.elm
module Utils exposing (now, later)

import Native.Utils
import Date exposing (Date)
import Task exposing (Task)

-- We can use the `Never` type because we are never
-- calling `scheduler.fail` inside our Native code
now: Task Never Date
now =
  Native.Utils.now

later: Int -> a -> Task String a
later =
  Native.Utils.later

Did you notice that now is not a function? You don’t have to expose only functions in the API returned from a Native module; you can put whatever you want in it, as long as you use it correctly inside your Elm code. Here, we directly create the Task, which is a wrapper around our actual code. It will be run using Task.perform or Task.attempt later inside our program.

The end

This is it for the basics. I will try to write another post about the syntax of the JavaScript compiled file next and about more advanced stuff.

WTF npm update?!

Posted on Jun 07, 2015

tl;dr

Do not use npm update with any package which uses custom dist-tags.

Dist-tags

This package officially has three versions: 1.0.0, 1.0.1 and 1.0.2. But inside the dist-tags (read more), we have two entries: the classic latest, which refers to the latest stable release of the package, and a custom one named canary, indicating the last non-stable release. If you are wondering whether real projects use such tags, the answer is yes: the npm package itself uses the latest and next tags for its weekly pre-releases (read more).

Currently, latest points to 1.0.1 and canary to 1.0.2. Meaning that if you run npm view test-npm-update, you will have something like:

{
  "name": "test-npm-update",
  "description": "...",
  "dist-tags": { "latest": "1.0.1", "canary": "1.0.2" },
  "versions": ["1.0.0", "1.0.1", "1.0.2"],
  "version": "1.0.1"
}

Versions

It’s important to realize a few things here. 1.0.2 is a released version just like 1.0.0 and 1.0.1, and we will say it’s the greatest one (as in the biggest number according to semver) but not the latest one (as in the one tagged with the latest dist-tag). We need this distinction to fully understand what happens next.

So, when you run npm view test-npm-update, it actually runs npm view test-npm-update@latest, meaning it will grab the information of the latest version, even if other versions have been released with a custom tag after it. So far, so good: NPM is doing exactly what I would expect. If I want a custom release such as the canary one, I can run npm view test-npm-update@canary and it will display info about the 1.0.2 version. I might be wrong, but I expect NPM to always use the latest version (aka the latest dist-tag) by default if I don’t specify anything. That’s what you can read nearly everywhere in the NPM documentation.

But remember, 1.0.2 is inside the versions array just like any other version. So, first warning: if you use such metadata for whatever you are doing, do not assume that the greatest version inside the versions array is the latest one.

Install

Now, what if I run npm install test-npm-update? What would you expect to be installed? 1.0.1, right? And of course it will be this version, the latest one. That’s normal; after all, latest is the default. All good here.

What if I clean my folder and then run npm install test-npm-update@^1.0.0? Guess what, 1.0.1 will be installed. And I’m totally ok with that. I asked for the best 1.x.x version and I’m glad to have the latest one since it matches.

package.json

But most of the time, you don’t install or update from command line, you have a package.json file with a range inside it. Let’s say we have the following one:

{
  "name": "awesome-project",
  "version": "0.0.0",
  "dependencies": {
    "test-npm-update": "^1.0.0"
  }
}

Pretty classic, right? Now, for the purpose of the demo, let’s say we currently have the 1.0.0 version of test-npm-update locally installed. If you want to reproduce, just create an empty folder, then create a package.json inside it with the previous content and run npm install test-npm-update@1.0.0 to force the install of an old version.

Done? Cool, let’s move forward. NPM has a command to check whether you have outdated versions locally installed, which is our case. Let’s verify by running npm outdated. You should see something like:

Package          Current  Wanted  Latest  Location
test-npm-update  1.0.0    1.0.2   1.0.1

Wait a minute. I’m ok with current (the locally installed version) being 1.0.0 and latest (matching the dist-tag) being 1.0.1, but wanted is supposed to be the best matching version I should install according to package.json. How can it be greater than latest?

Actually, it’s all ok according to the NPM documentation. After all, the package.json range is ^1.0.0, which means the greatest possible version without changing the left-most non-zero digit. Among all our versions (see the versions array from npm view), both 1.0.1 and 1.0.2 match this range, but since 1.0.2 is greater than 1.0.1, the wanted version is 1.0.2.
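To make the selection rule concrete, here is a tiny JavaScript sketch of the choice npm makes for a caret range (real npm delegates to the semver package; this toy version only understands plain x.y.z versions with a non-zero major):

```javascript
// Compare two x.y.z versions numerically.
function compare(a, b) {
  var pa = a.split('.').map(Number);
  var pb = b.split('.').map(Number);
  for (var i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

// ^base means: same major, and >= base (non-zero major only).
function maxSatisfyingCaret(versions, base) {
  var major = Number(base.split('.')[0]);
  var matching = versions.filter(function (v) {
    return Number(v.split('.')[0]) === major && compare(v, base) >= 0;
  });
  return matching.sort(compare).pop() || null;
}

// Dist-tags are never consulted: 1.0.2 wins even though
// the latest tag points to 1.0.1.
console.log(maxSatisfyingCaret(['1.0.0', '1.0.1', '1.0.2'], '1.0.0')); // 1.0.2
```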

I didn’t expect that, to be honest. It’s not wrong, but I can’t help finding it strange.

Install again

A quick note: if I run npm install with my package.json in an empty folder (aka without the 1.0.0 version already installed), it will still install the 1.0.1 version. That is consistent with latest being the default. Back to our outdated 1.0.0 version.

Update

Things start to get really ugly now. npm outdated just told me I have an old local version. I should probably update it, and NPM has a command for that. Let’s run npm update. To be honest, I wasn’t sure anymore what would be installed locally. I would normally have expected the 1.0.1 version. My brain was like “it should be the greatest stable version which matches the range”, with stable meaning lower than or equal to the latest tag. But for NPM, it’s more like “it should be the greatest version which matches the range. Period.” And that makes all the difference. My brain stops at 1.0.1 as the latest stable, but NPM browses all versions, including those behind custom dist-tags, like the canary one.

In the end, running npm update will install the 1.0.2 version. This is wrong. According to the documentation:

This command will update all the packages listed to the latest version (specified by the tag config), respecting semver.

I read that as “the latest version according to the latest dist-tag”. But we just updated to a version beyond this latest version. In any case, this is super dangerous! It means you can update to non-stable versions without even noticing it.

What if we didn’t have 1.0.0 already installed? Since npm update also installs missing packages, it will indeed install test-npm-update according to package.json and, of course, at the 1.0.2 version.

Conclusion

IMHO, this is way too dangerous: npm update should be capped by the latest version, and so should npm outdated. By default, no command should target versions beyond the latest dist-tag. Also, it seems inconsistent that install and update are both capable of installing a missing package from a package.json file, but not to the same version.

I raised an issue on GitHub; we will see. Be careful from now on.

Thanks for reading! Spread the word.

Personal ad

It might be a bit too early to speak about it, but if you need an outdated command which is actually capped by the latest tag and also supports other package managers (like Bower), please check my outdated project. It’s not ready yet, but it will be in the next few days, promise.

BrowserSync proxy in top of a Play Framework server

Posted on Aug 05, 2014

TL;DR ‘coz I’m a h4k3r

Full source code of the demo is on GitHub with a nice and simple README on how to bootstrap the project.

What are you talking about dude?

Play Framework is an HTTP server written in Scala. It is super cool and you should probably give it a try. Let’s say you are using it and you really enjoy how productive it is, only having to refresh your browser to see all your code modifications. But you want to go further: you want live-reloading! There are several tools doing that.

Introducing BrowserSync. It provides two killer features. The first is live-reloading: it monitors resources, and when it detects modifications, it reloads the browser page to load them. Even better, when possible, it hot-deploys them, meaning they are loaded without reloading the page. That’s the case for CSS files, which are loaded and the page repainted to display the new design without any refresh. The second is keeping several browsers synchronized across devices. And we are not talking about resources only, but about all user actions: if you scroll in one browser, it will scroll in all connected browsers, same for clicking on a button, and so on…

Let me say it again with a concrete example. You are on your computer, one screen with your source code, another one with Chrome opened and a last one with Firefox. You also have an iPad and an Android smartphone connected to your computer. All of those have your application’s main page opened. Each time you edit your code, they will all reload and display the new result (hot deploy in the case of CSS). When you are ready to test interactions, you just go to, let’s say, your Chrome browser and start scrolling and clicking. All your screens will start moving and performing the actions everywhere. Talk about increasing productivity!

What about assets that require compilation, such as SCSS, LESS, Stylus, CoffeeScript, etc.? No biggy bro (or sis), we have you covered. In this demo, we will use Gulp to monitor those resources, compile them, and pass them to BrowserSync for live-reloading (obviously without losing the hot deploy when possible). You can freely use whatever build tool you want, be it Grunt, Broccoli, …

Magic proxy

BrowserSync can be used as the web server; that’s the easiest way to use it, and I actually do that when my Play server is only there to provide a REST API. Consider it when you don’t need Scala templates or when you cannot use them (for example if you want to embed all your standalone HTML files inside a Cordova / PhoneGap application).

But here, we want to use Play as the web server, meaning it will be responsible for serving all our assets: HTML, CSS and JavaScript. For that, we will need the proxy feature of BrowserSync. Long story short, it starts another web server, on its own port, which displays the exact content from the Play server but adds a bit of JavaScript magic so it can enable all its features. It is just like opening your Play application, only on a different port. By default, you would use port 9000 for Play; in this demo, we will set the proxy on port 9001 (it would have been around 3000 by default, but I didn’t want anyone to freak out, so I put it as close as possible to Play’s default).

We also want to live-reload some of our resources. I kept it simple for this demo, but you are free to add as many as you want. We will use the files property of the BrowserSync configuration and set an array of files to monitor (it supports wildcards). Here, we are monitoring all CSS files from public/stylesheets, JavaScript files from public/javascripts and HTML files from app/views. There are two important points to notice. First, we reference files based on their path in the source code, not their actual URL. For example, public/stylesheets/main.css will be served by Play as assets/stylesheets/main.css, but BrowserSync only needs to know the real path; after that, you can map it to whatever URL you want. Second, we are monitoring Scala templates, and that’s fine: when BrowserSync detects a modification, it reloads the page, and Play re-compiles the template before serving it, so it displays the new version. It works just fine with both run and ~run, the latter being faster.

Here is a super small configuration for BrowserSync to enable such a proxy. You can read the online documentation to extend it.

var browserSync = require('browser-sync');

browserSync({
  // By default, Play is listening on port 9000
  proxy: 'localhost:9000',
  // We will set BrowserSync on the port 9001
  port: 9001,
  // Reload all assets
  // Important: you need to specify the path on your source code
  // not the path on the url
  files: ['public/stylesheets/*.css', 'public/javascripts/*.js', 'app/views/*.html'],
  open: false
});

What about compiled assets?

Right… right… there are assets that you cannot serve directly; you need to preprocess them first. The easiest way is to use a build tool: just pick one among the best ones (Gulp, Grunt, Broccoli, Brunch, …) and enjoy. There is a really simple separation of concerns here: the build tool only manages assets that are not served directly (LESS, CoffeeScript, …) and compiles them into assets that are actually handled by BrowserSync (CSS, JavaScript, images, …), which then live-reloads them.

It’s so easy that it’s nearly frustrating. In the demo, the main.css file is generated from main.less. I have two Gulp tasks to handle that: one for compilation and one to monitor any modification.

var gulp        = require('gulp');
var less        = require('gulp-less');

// Registering a 'less' task that just compiles our LESS files to CSS
gulp.task('less', function() {
  return gulp.src('./resources/less/main.less')
    .pipe(less())
    .pipe(gulp.dest('./public/stylesheets'));
});

// Let's watch our LESS files and compile them at each modification
gulp.task('watch', function () {
  gulp.watch(['./resources/less/*.less'], ['less']);
});

Wait, where are sbt-web and webjars?

Yeaaaaah… so lately, Play has introduced all that stuff about managing assets using SBT, some plugins and webjars. It works just fine, but for me, it’s just as strange as installing Play from NPM. It’s cool if people want to try to merge two super different worlds (front-end and back-end), be it Node.js or scala.js, but I’m totally not ready for that. As a full stack developer, I want to enjoy each world with its own ecosystem, not try to bend one into the other.

That’s why in all my projects, front-end assets are handled by front-end tooling (mostly Gulp and NPM lately) and my back-end resources by back-end tools (SBT, Ivy, Maven, …). Not only do I get the best of each, but I also avoid some limitations. For example, the other day, I came across a bug in a JavaScript library I was using. No biggy: fork the repo, fix the bug, open a pull request. And while waiting for the author to merge it (which can take some time), I am free to directly use my fork inside my package.json, so it’s not blocking at all. It took me less than an hour. I would be curious how to do that using webjars. Anyway, sorry for not planning any proof of concept using webjars, if that’s what you are using.

And that’s it! As I said at the beginning of the article, the demo is on GitHub, feel free to clone and play with it. Enjoy and thanks for reading.

Prismic.io: responsive images inside a StructuredText

Posted on Aug 04, 2014

Once again, I will not explain what prismic.io is (but it’s a tool to manage your website content), I will only focus on one limitation and how to solve it. If you don’t know about prismic.io, you should quickly check its website before reading any further.

What’s the problem?

In prismic.io, you are creating Documents which are composed of Fragments. A fragment is data organized in a more or less semantic way. For example, a fragment can be a Number, or a Date, … Among them, there is the Image type, which is really nice because you can not only fully resize / crop your raw picture, but also add smaller images based on the original one for responsive purposes. You can upload your awesome photo of a beautiful sunset, fully sized at 4000 x 3000 pixels, resize it for desktop at 1920 x 1080, and finally add a mobile version at 320 x 480. Note that the last one doesn’t respect the ratio at all, but that’s the goal: you are creating a subset of your image that best matches the targeted screen.

Another really powerful fragment in prismic is the StructuredText. It gives you a nice WYSIWYG editor inside which you can put new fragments (paragraphs, links, titles, images, …). It would be perfect if the image fragment inside a StructuredText was a real Image fragment, but it is not: you cannot specify any thumbnail, meaning your content cannot be responsive at all. Your choice is: consume all the bandwidth of mobiles with awesome full-HD images, or make the eyes of your desktop users bleed with super compressed images.

You don’t want to do that!

And me neither! I want responsive images inside my StructuredText, for God’s sake. I love tools that make my life easier, but they should never stand in my way when crafting the best website possible. I will not lie to you: I couldn’t find any solution that integrates nicely with the WYSIWYG editor, since there is no way to extend it. The following solution is a bit of a hack, and your writers might complain about it at first.

So, what’s the idea? The only way to have responsive images is to use a real Image fragment, no choice here. So let’s create one, and let’s put it inside a Group fragment. This way, you can add and remove as many images as you want, and they will all be responsive. Wait, you can’t do that inside the StructuredText, right? Indeed; that’s why we will do it outside of it and then find a way to bring the images back inside. Here is the mask for such a group:

{
  "Images" : {
    "images" : {
      "type" : "Group",
      "fieldset" : "Images",
      "config" : {
        "fields" : {
          "name" : {
            "type" : "Text",
            "config" : {
              "label" : "Name"
            }
          },
          "caption" : {
            "type" : "Text",
            "config" : {
              "label" : "Caption"
            }
          },
          "image" : {
            "type" : "Image",
            "config" : {
              "thumbnails" : [ {
                "name" : "mobile"
              }, {
                "name" : "tablet"
              } ]
            }
          }
        }
      }
    }
  }
}

Pro Tip - When creating thumbnails for an Image fragment, the official documentation states that both width and height are mandatory. This is wrong: you can specify only one of the two, or even none of them. Your UI might be a bit ugly (the resized thumbnails only display after saving your draft) and sometimes you will have to click twice on a button (this one, I really don’t understand), but otherwise it works just fine. I hope the prismic guys will make those attributes optional because… well… how can you anticipate how the writer wants to resize their image?

As you can see, I added two more fields. One is name, and we will use it to reference our images inside the StructuredText. The other one is caption; that’s because I want captions on my images and prismic doesn’t allow that at all by default. Thank God, with this solution I can finally do it, killing two birds with one stone. Now we can add as many responsive images as we want, yeah! Let’s grab some orange juice to celebrate!

Let’s hack some HTML

As I already said, the next step is to bring those images inside the StructuredText. We cannot do that directly inside the WYSIWYG editor, but we can do it when rendering the final HTML. We will need two more things here:

  • having a placeholder inside the StructuredText to indicate we should insert an image
  • extend the default HTML renderer to support such a placeholder

As for the placeholder, I decided to use a simple format like {image-[name of the image]}. Meaning that if the content of a paragraph inside my StructuredText is {image-sunset}, it will actually render the image named sunset from the group of images when generating the final HTML. Pretty easy, right? Your writers might find it ugly, having to write those strange tags and not seeing their images directly inside the StructuredText, but just tell them it’s for the greater good. You could, of course, use another syntax, eventually supporting attributes, whatever.

The final task, and probably the hardest one, is to extend the HTML rendering system. Each prismic fragment has its own way to render as HTML. Unfortunately, there is no way to extend it; you can only, eventually, override it. And the one for StructuredText is by far the most complex. The easiest way is to copy/paste the default one from the kit you are using (if you are not using any kit, you are free to do whatever you want, so it’s fine) and edit it. You then edit the part responsible for rendering a paragraph: test if its content matches our placeholder syntax; if so, render an image, if not, just render the text as it is. Since I’m using the JavaScript kit, I did a full copy/paste of its renderer; the important part is where we render the image. My model is actually a JavaScript class extracted from the original prismic Document, but you could use the raw Document, of course.
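To give you an idea, here is a hedged sketch of the paragraph-rendering part, independent from any kit (renderParagraph and the images lookup object are hypothetical names for this example, not the actual JavaScript kit API):

```javascript
// Hedged sketch of the paragraph-rendering override.
// `renderParagraph` and the `images` lookup are hypothetical
// names, not the actual JavaScript kit API.
var PLACEHOLDER = /^\{image-([a-zA-Z0-9-]+)\}$/;

function renderParagraph(text, images) {
  var match = text.match(PLACEHOLDER);
  if (!match || !images[match[1]]) {
    // Not a placeholder (or unknown name): render the text as-is
    return '<p>' + text + '</p>';
  }
  var image = images[match[1]];
  return '<img src="' + image.url + '" alt="' + (image.caption || '') + '">';
}

console.log(renderParagraph('{image-sunset}', {
  sunset: { url: 'https://example.com/sunset.jpg', caption: 'Sunset' }
}));
// → <img src="https://example.com/sunset.jpg" alt="Sunset">
```

In the real override, images would be built from the Group fragment of the Document (keyed by the name field), and you would probably output the mobile and tablet thumbnails as well.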

Conclusion and limitations

That’s pretty much it. Here is a quick summary of what to do:

  • inside the mask, add a Group of Images with, at least, a Text fragment for the name.
  • choose a naming convention to write placeholders inside a StructuredText, ex: {image-[name]}
  • override the StructuredText HTML rendering function with a custom one inserting images from the Document when finding a placeholder
  • you can add more attributes and render them (like a caption)

What are the bad points / limitations of the solution?

  • writers need to write placeholders rather than having a nice UI to insert images
  • images do not display natively in the WYSIWYG editor anymore
  • you cannot re-order a Group fragment right now, meaning images will be sorted based on their creation date and not on their place inside the StructuredText

Doing a conference website in two days

Posted on Aug 01, 2014

Last week, on Wednesday, I had a meeting with some colleagues at Movio about the Scala Downunder conference. It’s a Scala conference in Auckland, New Zealand, with speakers from Typesafe and local ones, and some replays from Scala Days. I discovered that we were doing the website for the conference here at Movio, and that it was due the following Friday in order to send it to Typesafe for feedback… Ah, and also that I would be crafting it.

(Pssst… okay, there have been some commits on the repo after the deadline, but that’s only because the schedule totally changed, not my fault!)

Let’s get started!

So, the deadline is two days away, and the website is only one long page with all the classic stuff: presentation, speakers, schedule, location and sponsors. Really good point: both the design and the content were nearly done. Otherwise, it would have been impossible.

My first step was to pick my tooling. I truly hate copy/pasting anything, so it was out of the question to copy/paste HTML when it comes to iterable data. I needed a static HTML website generator. Since the plan was to publish it on GitHub Pages, I chose Jekyll. This way, I would get the build for free, only needing to push my source code to the gh-pages branch (in the end, that didn’t work at all, but Jekyll is still a good choice). Bootstrapping with Jekyll is super fast as long as you already have Ruby installed on your machine.

Next, I picked Gulp as my build tool. I love both Grunt and Gulp, but I’m trying to get more familiar with Gulp lately (at some point, I will switch to Broccoli anyway). Here is one important point: you should have snippets for each Gulp (or Grunt) task ready somewhere so you can do your setup in minutes. For example, here, I wanted to use LESS as the CSS preprocessor, Browserify for JavaScript packaging and BrowserSync for live-reloading. I already had all those snippets, so I just needed to copy/paste them once to have my server running live and compiling all my assets.

Be semantic with your content

Remember, the most important thing is your content, not the design nor the funky JavaScript effects. Don’t rush to apply funny colors or amazing transitions before having real content behind them, or you might not see the real behaviour when implementing them. Since I already had the content, I wrote only HTML during the first hour. It allowed me to be 100% semantic with my content. I don’t care about CSS selectors or whatever: if this is a paragraph, I create a <p> tag, if this is a section, I use a <section> tag. And at the end, you can already see the 0.0.1 version of your website, fully “usable” (just a bit ugly, with tons of typos).


After that, I worked on all the dynamic parts of my HTML. I really wanted to have the full content before starting any LESS. I used the _data folder of Jekyll, which allows me to structure data in YAML files: I put all the speakers and the schedule in it. After that, it’s all about the Liquid templating system to render it. You have for loops and if blocks, you have filters to render Markdown and so on; for such a simple website, it was more than enough.
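As an illustration (the field names are invented, not the actual ones from the site), a _data/speakers.yml file and the Liquid loop rendering it could look like:

```yaml
# _data/speakers.yml
- name: Jane Doe
  talk: Static sites in a hurry
- name: John Smith
  talk: Flexbox everywhere
```

```html
{% for speaker in site.data.speakers %}
  <article>
    <h3>{{ speaker.name }}</h3>
    <p>{{ speaker.talk }}</p>
  </article>
{% endfor %}
```

Add a speaker to the YAML file, and the page updates itself. No copy/pasting.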

At this point, we were about half a day into the project, with all the content done and a nice setup allowing us to be super productive. It could have been faster, but there was quite some content to write, and implementing all the templating took a bit more time than planned. Still, that’s cool!

Do it with style

As you can imagine, the next step was the website design. If possible, I prefer to write all my CSS classes first, staying close to the HTML semantics, and then implement them, rather than inventing crazy CSS selectors and then binding them to the HTML. It’s important to know what your browser target is. Because, trust me, you want to use flexbox, but that means IE10+ (using a LESS mixin for the old syntax). Lucky me, I could do that. If not, consider using a grid system.
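For the record, such a mixin boils down to emitting the old vendor syntaxes alongside the standard one. A simplified sketch (property coverage here is partial and illustrative, not the mixin I actually used):

```less
// Cover the IE10 "tweener" syntax and old WebKit alongside the standard
.display-flex() {
  display: -ms-flexbox;  // IE10
  display: -webkit-flex; // old WebKit
  display: flex;
}

.flex(@grow: 1) {
  -ms-flex: @grow;
  -webkit-flex: @grow;
  flex: @grow;
}

// Usage
.header {
  .display-flex();
}
```

One mixin call per flexbox property, and IE10 stops being a problem.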

When doing the design, never ever think “Oh, it already looks pretty good… I shouldn’t be far from the end”. The Pareto principle applies a lot to CSS in my opinion. In just two hours, I had a decent design, but in the end, it cost me a full day to get the final thing. Why? Because after the main style is done, you start testing on other browsers and devices, find small bugs, and spend way too much time correcting them. Also, at some point, if you can, you should review the website with the designer. This is vital but sometimes really hard: he will probably want to reach pixel perfection, and nobody can blame him for that, but it takes time, probably some refactoring, and also some hacks. In the end, you will have spent 20% of your time getting a nice (but not perfect) design on Chrome, and 80% getting a pixel-perfect design on most modern browsers, being responsive, and having a decent fallback on old ones.

It was a pretty simple design, so it was quite straightforward. Just remember you can do magic with pseudo-elements; I use them a lot for all those shapes (circles, lines, …). Flexbox does an awesome job overcoming old CSS limitations (who said vertical align?) and staying responsive. This way, I didn’t have to use any open-source CSS project except for my reset, the awesome normalize.css.
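The pseudo-element trick is worth spelling out: you can draw decorative shapes without polluting the HTML with extra tags. A hypothetical example (the class name and values are mine, not from the site):

```css
/* A small colored circle before a section title, with no extra markup */
.section-title::before {
  content: "";
  display: inline-block;
  width: 12px;
  height: 12px;
  margin-right: 8px;
  border-radius: 50%;
  background: #e74c3c;
}
```

Circles, lines, arrows: most of those small decorations are just borders and border-radius on ::before and ::after.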

I also decided to use pure SVG rather than an icon font. This decision was based on reading great articles from Chris Coyier, like this one, that one and finally this battle one. It all worked fine.
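If you have never inlined an SVG icon, it is as simple as dropping the markup straight into the page (this is a generic example, not one of the site’s actual icons):

```html
<!-- An inline SVG icon; currentColor makes it follow the text color -->
<svg width="24" height="24" viewBox="0 0 24 24" aria-hidden="true">
  <circle cx="12" cy="12" r="10" fill="currentColor" />
</svg>
```

No font loading, no weird vertical alignment, and you can style it with plain CSS.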

Finish the script

During the last half day, I focused on writing the small amount of JavaScript needed for the website. There really wasn’t much of it: a double sticky navigation bar, nice scrolling between sections and a modal panel for each talk description. Let’s be honest, since I was targeting IE10+, I totally didn’t need jQuery. But if you check the source code, you will find it. Why? I was starting to run out of time and I really wanted to use some nice plugins to leverage all the grunt work. Also, with one of the latest versions of jQuery, it is fast enough and does not add too much of a payload. So… it’s fine.

I finally had the chance to use Velocity, a jQuery plugin for animations. It works just great, super fast even on mobile; you should try it. It powers both the scrolling and the modal animation. For the sticky navs, I chose the Sticky plugin, which is simple and great. Nothing more to add. I packaged all that stuff with Browserify, more because I love this tool than because it was needed.
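The section scrolling boils down to Velocity’s built-in "scroll" command. A sketch of what it looks like (selectors, offsets and durations here are illustrative, not the site’s actual values):

```javascript
// Smooth-scroll to a section when a nav link is clicked
var $ = require('jquery');
require('velocity-animate');

$('nav a[href^="#"]').on('click', function (event) {
  event.preventDefault();
  $($(this).attr('href')).velocity('scroll', {
    duration: 500,
    offset: -60 // leave room for the sticky nav bar
  });
});
```

A handful of lines, and that is most of the JavaScript the site needs.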

Going further

So, why didn’t it work to only push to gh-pages? It did at first, but then I started to use more complex features from Jekyll 2.x and, guess what? GitHub Pages still uses a 1.x Jekyll version, so it didn’t match. Also, I wanted to do post-generation stuff, like usemin to minify all my assets and append a hash version to their names, also updating the index HTML file. I just had to write a simple deploy script to overcome the problem.
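The idea of such a script is simple: build locally with the Jekyll version you want, post-process the output, then force-push the result to gh-pages. A hypothetical sketch (repository URL, paths and task names are placeholders, not the real ones):

```shell
#!/bin/sh
set -e

jekyll build                 # local Jekyll 2.x, not the GitHub Pages one
gulp usemin                  # minify assets and append revision hashes
cd _site
git init
git add .
git commit -m "Deploy"
git push --force git@github.com:user/project.git master:gh-pages
```

GitHub Pages then serves the pre-built _site content as-is, with no server-side Jekyll involved.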

I don’t really want to go through all the details, but you can see the final result, read the full source code, and here are a few highlights:

Hope you liked it! Thanks for reading.
