parrisneeds.coffee code, music, drinks

JS - Vanilla JS in 2016 - Maybe the future is closer than we thought

This story starts with a recent job hunt. One of the companies I applied to asked me to create an interface that searches Twitch. Twitch, for those of you who don't know, is a site where people can stream their activities, mostly around gaming.

The prompt at a high level looked something like this:

  1. One week time limit
  2. Take your time, make it perfect, show us how you'd build a system
  3. No libraries
  4. Must use JSONP

GitHub: https://github.com/parris/twitchstreamsearch/

So, what do you do when people say "one week", "make it perfect", and "no libraries"? Well, if you're anything like me, you'll basically make all your own libraries and spend an entire week doing it.

From the very get-go I started thinking about writing my own little mini-React and mini-Redux. The next problem I knew I'd face was module loading, or cleanly splitting up my files. Step one for me, though, was actually to start off with a Jasmine-like test runner so I could experiment and safely modify my code.

What followed from then on was me learning that vanilla, plain-ol'-JS in 2016 is insanely powerful. Right now Chrome and Firefox support everything from

template literals

/src/components/Card.js#L23-L24

`${stream.game} - ${stream.viewers} viewers`

to arrow functions

/src/components/Header.js#L13-L18

(e) => {
    e.preventDefault();
    let queryInputEl = document.querySelector('.js-query-input');
    this.props.onSearch(queryInputEl.value);
}

to symbol property keys

/src/reducers/streams.js#L8-L12

const transformers = {
    [actionTypes.search]: function(state, action) {
        return [];
    },
    [actionTypes.searchComplete]: function(state, action) { ... },
};

to classes

/src/components/Header.js#L9-L11

class Header extends Component {
    events() { ... }
    render() { ... }
}

to promises

/src/utils/request.js#L12

let requestPromise = new Promise(function(resolve, reject) { ... });

to block scoping, const/let, trailing commas, the fetch API, and more.

There were only 2 things missing from my typical Babel/Webpack world.

First off, I was missing a module loader and an easy way to split my files up. In this exercise, I relied on AMD to fill this gap. Functionally, AMD works similarly to the ES6 module proposal, and it appears that ES6 modules will indeed be async in their final form. In addition, with the advancements around HTTP/2 there should be no need for a separate build step, since HTTP/2 prefers many smaller files over one large built file.

Modules in my exercise were defined like this:

define(function(require) {
    'use strict';

    const build = require('/src/utils/componentBuilder.js');
    const Component = require('/src/components/Component.js');
    const { Button, Div } = require('/src/components/BasicComponents.js');
    const i18n = require('/src/utils/i18n.js');

    return {};
});

Between "define" and "require" in the above example, the system figures out how to load the tree of dependencies in the browser.
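The core idea can be sketched with a tiny registry-based loader. This is hypothetical code, not the repo's actual implementation: the real loader fetches files over the network and infers each module's path, while this sketch takes an explicit path and assumes every module was defined up front (the names are also changed from define/require so it won't clobber Node's own require if pasted into a CommonJS file):

```javascript
// Hypothetical mini-loader: a registry of module factories plus a caching lookup.
const registry = new Map(); // path -> factory function
const cache = new Map();    // path -> evaluated module exports

function defineModule(path, factory) {
    registry.set(path, factory);
}

function requireModule(path) {
    if (cache.has(path)) return cache.get(path); // each module evaluates only once
    const factory = registry.get(path);
    if (!factory) throw new Error('Module not found: ' + path);
    const exports = factory(requireModule);      // hand the loader to the factory
    cache.set(path, exports);
    return exports;
}

// Usage: Header pulls i18n in through the loader it was handed,
// mirroring the define(function(require) { ... }) pattern above.
defineModule('/src/utils/i18n.js', () => ({ t: (key) => key.toUpperCase() }));
defineModule('/src/components/Header.js', (require) => {
    const i18n = require('/src/utils/i18n.js');
    return { title: i18n.t('search') };
});

console.log(requireModule('/src/components/Header.js').title); // SEARCH
```

The caching map is what makes diamond-shaped dependency trees safe: two modules requiring the same file share one evaluated instance.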

Second, Babel also helps us when writing React components. In this exercise, I created a component builder that mimics React's composability. React, of course, could be used sans JSX. While writing this post I tried to see if Web Components were in a mature enough state to use, but it seems that even Chrome doesn't really support them out of the box.
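The builder boiled down to something in the spirit of this sketch (hypothetical, simplified names — the actual componentBuilder.js in the repo differs): elements are plain objects and components are just functions, which is what makes them composable:

```javascript
// Toy component builder: h() creates element descriptions,
// renderToString() walks them and produces HTML.
function h(tag, props, ...children) {
    return { tag, props: props || {}, children };
}

function renderToString(node) {
    if (typeof node === 'string') return node;          // text node
    if (typeof node.tag === 'function') {
        // composition: a component is just a function of props
        return renderToString(node.tag(node.props));
    }
    const attrs = Object.entries(node.props)
        .map(([k, v]) => ` ${k}="${v}"`)
        .join('');
    const inner = node.children.map(renderToString).join('');
    return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

// A "Card" component composed out of plain elements:
const Card = (props) =>
    h('div', { class: 'card' }, `${props.game} - ${props.viewers} viewers`);

console.log(renderToString(h(Card, { game: 'Dota 2', viewers: 1200 })));
// <div class="card">Dota 2 - 1200 viewers</div>
```

A real builder would diff against the DOM instead of emitting strings, but the composition mechanism is the same.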

I've been working in Babel JavaScript land for quite some time now (maybe the past 2 years). This is the first time it felt like I didn't need Babel/Webpack anymore. We still seem to be missing a few key elements (modules and components), but we don't seem that far off, and as my example showed, those limitations can be overcome today. Maybe the future is closer than we thought! Maybe someday we can drop these bulky build steps!

Links:

  1. GitHub: https://github.com/parris/twitchstreamsearch/
  2. Live Demo
  3. Test Runner

Talks - Nodevember - OpenGL/WebGL + Text (Signed Distance Fields)

At the last Nodevember in Nashville I talked about WebGL (and OpenGL) and the ways you can render text with it. You can read about it in detail here.

Check out the full video here:

and my slides:

FlexBox Formulas - the Search and Go

We've seen "search and go" boxes since the dawn of the internet, yet they remain surprisingly hard to get right.

Typically, we'd fix the width of the input or the button and use one of a few techniques to get the size of the other correct. This breaks down when you have a fluid page and translations.

I'll tell you what you want! You want a beautiful, 100% width, sexy, awesome input box next to a beautiful, perfectly translated, shiny button that just screams "click me!" So let's make this happen. We do have the technology (well mostly).

Show me the code:

Why Dorsal.js

Problem 1: You have all these server side pages and lots of JS components that need to get initialized on every page.

Problem 2: You want to stop manually initializing your JS components with the same code over and over again.

Problem 3: Your backend developers don't really know JS, but they occasionally write some HTML.

Yea. Dorsal fixes all of that.

So what is Dorsal? In short, Dorsal allows you to markup your HTML with classes and data attributes and in return automatically initializes components. These components could be written using Backbone, Marionette, plain JS or whatever else!

Example:

Some developer writes some HTML somewhere on the website like so: <input type="checkbox" class="js-d-toggle" data-d-size="large" />

You had previously created a plugin called 'toggle', and it gets loaded onto the page (however you choose to do so).

var dorsal = require('dorsal'),
    ToggleView = require('./toggle_view');

dorsal.registerPlugin('toggle', {
    create: function(options) {
        return new ToggleView(options.el, options.data.size);
    },
    destroy: function(options) {
        options.instance.remove();
    }
});

Somewhere on your page, after all the HTML has been written, and possibly after DOM ready, you can run wire:

dorsal.wire();

Wire scours the DOM for js-d- prefixed classes, matches them with plugins and passes the element and options into the plugin.
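Conceptually, wire does something like the following sketch. This is hypothetical code, not Dorsal's real implementation — the real wire walks the live DOM, while here "elements" are plain objects with className and dataset so the idea reads without a browser:

```javascript
// Sketch of wire(): match js-d-<name> classes to registered plugins
// and hand each plugin the element plus its data attributes.
const plugins = {};

function registerPlugin(name, plugin) {
    plugins[name] = plugin;
}

function wire(elements) {
    return elements.map((el) => {
        // find the first js-d-* class that names a registered plugin
        const match = el.className.split(/\s+/)
            .map((cls) => /^js-d-(.+)$/.exec(cls))
            .find(Boolean);
        if (!match || !plugins[match[1]]) return null;
        // the plugin's create() gets the element and its dataset
        return plugins[match[1]].create({ el, data: el.dataset });
    }).filter(Boolean);
}

registerPlugin('toggle', {
    create: (options) => ({ kind: 'toggle', size: options.data.size }),
});

const instances = wire([
    { className: 'js-d-toggle', dataset: { size: 'large' } },
    { className: 'plain', dataset: {} }, // no js-d- class, so it's skipped
]);
```

The real library also tracks the created instances so destroy() can be called later when elements leave the page.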

Why???

During our recent responsive push at Eventbrite, we needed to let everyone simply and quickly initialize JS components. We anticipated our build system slowing down from 100 new files that would need to run through the RequireJS optimizer. We also anticipated that all of these files would contain pretty much the same code. They would have looked something like:

define(function(require) {

    var SelectBox = require('styleguide/js/select'),
        ToggleSwitch = require('styleguide/js/toggle'),
        ResponsiveTable = require('styleguide/js/tables');
        // ... up to 20 other plugins

    new SelectBox(el1);
    new SelectBox(el2);
    // ... 20 other select boxes, 5 other toggle switches, etc.

});

Basically all that, on 80+ pages.

Instead we created Dorsal, and internally what we call our standard component library. The component library loads up all of our standard components, which each have Dorsal plugins in them. Dorsal then runs on domReady and everyone is happy.

But WAIT there's more

Lately, we've been using Dorsal for more. We've been using it in the onRender methods of Marionette classes. We've been using it for one-off components that we think might eventually fit nicely into our styleguide. In the absence of a pre-rendering service, we've been using it on SEO-sensitive pages. We were basically able to keep all our existing Backbone and Marionette code while making our workflow feel closer to React or Angular. We felt this was actually the largest missing piece in the Backbone/Marionette community, so we made it open source! Check out Dorsal on NPM!

ReactJS workflow with Mocha, JSDom, Gulp, Browserify, CoffeeScript and SublimeText

When I first heard about ReactJS, CoffeeScript, Gulp and all these other new libraries I immediately thought “oh god why.” Over the past few months I’ve used each one separately, and learned about why they are cool or useful. Mostly the benefits involved less typing or some performance enhancement.

Whenever you need to integrate some new libraries it often takes time to figure out how they fit together into some usable workflow. Slowly, I learned how to integrate all of these libraries so here’s my report on all of that.

Gulp/Browserify

A friend and I set up Eventbrite's RequireJS and Grunt infrastructure. Those tools were the right choice for that stack. I've been using Browserify off and on for side projects for the past year, and in comparison its simplicity is great.

I did learn that Gulp doesn't really buy you much when using Browserify, thanks to Browserify's plugin system; however, Gulp is still pretty useful. Gulp's piping approach seems more ideal when making multiple transformations on files.

Err... all the cool kids are using it, so I wanted to try it out.

Setting up Gulp to use Browserify is easy. In order to get CoffeeScript and React working with Browserify and Gulp you should use the coffee-reactify transformer. Make sure to first npm install coffee-reactify --save-dev.
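A gulpfile for this could look roughly like the following sketch (hypothetical — the task and path names here are illustrative, and it assumes gulp, browserify, and vinyl-source-stream are installed alongside coffee-reactify):

```javascript
// gulpfile.js: bundle a CJSX entry point through Browserify with the
// coffee-reactify transform, then write the result out via Gulp.
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('build', function() {
    return browserify('./src/app.cjsx', { extensions: ['.cjsx', '.coffee'] })
        .transform('coffee-reactify')  // compiles CoffeeScript/CJSX on the way in
        .bundle()
        .pipe(source('bundle.js'))     // turn the text stream into a vinyl file
        .pipe(gulp.dest('./dist'));
});
```

Run it with `gulp build`; vinyl-source-stream is the usual bridge between Browserify's plain stream and Gulp's vinyl pipeline.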

SublimeText

You should just need to install the "Better CoffeeScript" and "ReactJS" packages. They now support the ".cjsx" extension out of the box.

I went ahead and configured all my .coffee files to use that flavor of CoffeeScript. To enable this click "View" -> "Syntax" -> "Open all with current extension as..." -> "ReactJS" -> "CoffeeScript". You may need to restart Sublime during and after this process.

Unit Testing ReactJS via Mocha

You could use Facebook's new test framework Jest, but honestly I'm just used to Mocha. It's older, and has lots of integrations figured out. That being said you need to do some set up to get React and Mocha playing nicely.

First, run npm install coffee-react --save-dev and set it up in your mocha.opts:
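The original mocha.opts was an embedded snippet; its contents would have been along these lines (hypothetical — the exact flags depend on your versions, and newer Mochas later replaced --compilers with --require):

```
--compilers cjsx:coffee-react/register
--recursive
```

This tells Mocha to run .cjsx spec files through coffee-react's require hook before executing them.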

Next, you'll need to set up jsdom. You should start by running npm install jsdom --save-dev.

Then go ahead and set up a spec_helper and place it in your test folder:

This spec_helper attaches sinon to @ for every test; it also exposes document and window as globals, which ReactJS needs.

Now let's write some tests:

In this test we stub out some Parse methods, create some fake elements using jsdom, use Simulate to fire a click, and make sure some text is present. Everything a happy, healthy test could ever want, now with some great syntactic sugar.

Side benefits

  1. Using this system we can actually now also render our code server side! Notice that the Mocha test runs completely server side.
  2. Speed. React is lightning fast!
  3. Reduction in code complexity. I spent 1 day setting up an authentication system. It is rock solid and even has unit tests and other great features.

Conclusions

The great thing about this workflow is that it is 100% browser independent. This makes your unit tests, well, more unit test like. It makes your entire code base more modular and componentized.

Finally, if you need to integrate just one more library, someone has likely already created something for it, so go explore :). I have more code snippets of this architecture; you can ask me to post them via Twitter: @parrissays. Also, I'm thinking about a Yeoman generator for the next time I want to write an app using this architecture.