pipe-dom: DOM manipulation with the F#-style pipeline operator

Last week, Babel released version 7.5.0, which included our implementation of the F#-style pipeline operator. If you’re not already aware, TC39 is exploring the potential of a pipeline operator in JavaScript. You can learn more about what’s going on with the operator here. At this point, we’ve got all 3 proposals in Babel, so the next step is to start getting feedback on them. To help with that, I’ve put together a small DOM manipulation library, pipe-dom, for the newly-released F#-style pipeline operator.

When new developers start learning JavaScript, they often start with jQuery. One of jQuery’s defining characteristics is its fluent API, which allows you to write clear, concise code when making a series of DOM modifications. The major downside to this API style is that all of these methods need to be attached to the jQuery object, which means the entire library needs to be loaded to be usable. Minified and gzipped, jQuery is ~30KB, which is a lot to include if you’re just trying to toggle classes.

With the introduction of modules to JavaScript, bundlers are able to analyze what’s used in a project and remove unused functions, a process called tree-shaking. pipe-dom takes jQuery’s fluent API and combines it with the pipeline operator, allowing users to import the exact DOM methods they want and let bundlers remove the rest. Here’s what that might look like:

import { query, addClass, append, on } from 'pipe-dom';

query('ul')
  |> addClass('important-list')
  |> append(
    document.createElement('li')
      |> addClass('important-item')
      |> on('click', () => console.log('item clicked'))
  );

With this, your bundler can include just the code for query, addClass, append and on, and discard the rest of the library.

I’ve included a small API initially to get the idea out there, but I’m very interested in expanding it, so please open an issue and suggest ideas! I’m very open to expanding the API and evolving the library along with pipeline operator best practices.

Check it out and let me know what you think!

Using TypeScript tagged unions for loading state in Redux

Dealing with loading state is a core requirement of most apps you build. Every app needs data, and that data almost always needs to be loaded from somewhere, so you need to manage your loading state. Redux doesn’t provide any particular structure for this, but combining it with TypeScript enables some useful patterns. Let’s take a look at a few ways of handling it and their downsides, concluding with an approach I’ve used successfully on a few projects.

We’ll be using TypeScript throughout the examples, but many of the concepts here are useful without the types. The TypeScript-specific content comes towards the end, so I’d encourage JavaScript-only developers to read through to the end; even if you don’t use TS, the same patterns can be applied in JS.

Naive Approach

The most common structure you’ll see for this looks like this:

type State = {
  items: Item[];
};

const defaultState: State = {
  items: []
};

This seems pretty simple: start with an empty array, add additional items to the items key as you fetch them, and keep the checks in your views simple. But we’ve already got a problem: there’s no distinction between "haven’t loaded any items" and "successfully loaded no items". This can work for some apps, if they’re really simple or they die on load failure (like a CLI app), but for most typical web apps, this is going to be a problem.

So let’s toss in a null instead to indicate that the items haven’t been loaded yet:

type State = {
  items: Item[] | null;
};

const defaultState: State = {
  items: null
};

So now we can tell the difference between whether things are loaded or not: if state.items === null, they haven’t been loaded yet. So far so good.

But what happens if the server errors? We can’t represent an error state with this setup. How do we do that?

Handling error & loading states

We could solve this by adding an error key to the state:

type State = {
  items: Item[] | null;
  error: Error | null;
};

const initialState: State = {
  items: null,
  error: null
};

const successState: State = {
  items: response.data,
  error: null
};

const errorState: State = {
  items: null,
  error: response.error
};

This allows us to represent the initial, loaded & error states, with the examples above expressing those possibilities. It is a bit onerous to derive those states, though: you have to check both of the values to figure out where you’re at, because at the "unloaded" step, both are null. A conditional check could look like this:

if (state.items === null && state.error === null) {
  return /* unloaded view */;
}

if (state.items !== null) {
  return /* loaded view */;
}

// We can assume we're in the error state now
return /* error view */;

There are various ways of structuring this conditional, and each is ugly in its own particular way. You could extract these conditionals into functions, which would at least give them readable names.
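As a sketch, extracting them might look like this (the helper names here are my own, not from the original):

```javascript
// Hypothetical helper predicates for the { items, error } state shape.
// Each conditional gets a readable name.
function isUnloaded(state) {
  return state.items === null && state.error === null;
}

function isLoaded(state) {
  return state.items !== null;
}

function isErrored(state) {
  return state.error !== null;
}
```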

However, this state can’t tell us whether the API request has started or not. If the API request starts immediately, the difference is immaterial to the view. But if you need this information, you could add another property to indicate whether the request is in flight:

type State = {
  loading: boolean;
  items: Item[] | null;
  error: Error | null;
};

const defaultState: State = {
  loading: false,
  items: null,
  error: null
};

And the conditional complicates accordingly:

if (state.loading) {
  return /* loading view */;
}

if (state.items === null && state.error === null) {
  return /* unloaded view */;
}

if (state.items !== null) {
  return /* loaded view */;
}

return /* error view */;

Unionize!

But let’s take a step back: we’re really trying to represent the various states of the API request by checking the effects of the request. Instead, we should just represent the current loading state explicitly:

type State = {
  status: 'unloaded' | 'loading' | 'loaded' | 'error';
  items: Item[] | null;
  error: Error | null;
};

const defaultState: State = {
  status: 'unloaded',
  items: null,
  error: null
};

Now we have all of our states represented by a string, indicating exactly what state the API request is in. The conditional gets simplified as well: we can now use a switch statement to exhaust all possible states:

switch (state.status) {
  case 'unloaded':
    return /* unloaded view */;
  case 'loading':
    return /* loading view */;
  case 'loaded':
    return /* loaded view */;
  case 'error':
    return /* error view */;
}

Now we’re talking! There’s a very clear mapping between the various states and their related views, and you know exactly what data is available to you in each: the error state always has an error object, the loaded state always has the array of items, and the loaded view itself can display a "no items found" message if the array is empty.

This approach is easily extensible as well. You can add 'reloading' and 'reload-error' states, in case you need to refresh data while displaying the stale data at the same time. It’s much more powerful and flexible than adding random keys and hoping you can continue to figure out what’s happening based on the data you have.
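As a sketch of what that extension could look like (the action types and state shape here are hypothetical), a reducer can carry the stale items into the 'reloading' state:

```javascript
// Hypothetical reducer fragment: refreshing keeps the stale items
// around so the view can keep displaying them while new data loads.
function reducer(state, action) {
  switch (action.type) {
    case 'REFRESH_REQUESTED':
      return { status: 'reloading', items: state.items };
    case 'REFRESH_FAILED':
      return { status: 'reload-error', items: state.items, error: action.error };
    case 'REFRESH_SUCCEEDED':
      return { status: 'loaded', items: action.items };
    default:
      return state;
  }
}
```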

In JavaScript, there isn’t much more to be done. Since there’s no type system to encode the relationship between status and the other properties, you simply know that when status === 'loaded', state.items is the array of items, and you move on. But if you’re using TypeScript, you’ll need to represent that relationship in the type system. In fact, the above example will error in TypeScript, as state.items could be null. We can solve this with tagged unions.

Tagged unions

Let’s start by looking at tagged unions. A tagged union allows us to combine two types and discriminate between them with a tagged property.

type First = {
  tag: 'first';
  prop: number;
};

type Second = {
  tag: 'second';
  value: string;
}

type FirstOrSecond = First | Second;

declare function getFirstOrSecond(): FirstOrSecond;

const thing: FirstOrSecond = getFirstOrSecond();

switch (thing.tag) {
  case 'first':
    // TypeScript knows prop exists and is a number.
    // Accessing thing.value would cause an error.
    return thing.prop + 1;
  case 'second':
    // And value exists and is a string here
    return thing.value + ' came in second';
}

We’ve created a union type, FirstOrSecond, from two types with an overlapping property, tag. The types can have any additional properties they’d like, as long as there’s one overlapping property, with a constant of some kind, that TypeScript can use to discriminate between the types. Actions in Redux, with their type property, are another common example of this, and typesafe-actions makes it easy to implement in that case.

However, this discrimination does not work with arbitrary properties. It’s a common complaint: if you have a union type whose member types have no overlapping properties, you can’t check for the existence of one of those properties to determine which type you’re looking at. This does not work:

type First = {
  prop: number;
};

type Second = {
  value: string;
};

type FirstOrSecond = First | Second;

declare function getFirstOrSecond(): FirstOrSecond;

const thing: FirstOrSecond = getFirstOrSecond();

// TypeScript complains about Second not having a `prop` property.
if (typeof thing.prop === 'number') {
  // TypeScript does not know you have a First
  // without an explicit cast
  const prop = (thing as First).prop;
} else {
  const value = (thing as Second).value;
}

Now that we understand what a tagged union type is, we can use this concept to tag our various loading states and their associated properties:

type UnloadedState = {
  status: 'unloaded';
};

type LoadingState = {
  status: 'loading';
};

type LoadedState = {
  status: 'loaded';
  items: Item[];
};

type ErrorState = {
  status: 'error';
  error: Error;
};

type State = UnloadedState | LoadingState | LoadedState | ErrorState;

Now, in the switch statement above, we know, at the type level, what properties are available to us. No more null checking or ugly casts–just a clean description of the various states and their associated, known-to-be-present properties.

Conclusion

Next time you need to implement a data loading scheme, start off with a version of this, and it’ll make your data much easier to extend over time.

What yo-yos taught me about being a developer

When I was in 4th grade, a yo-yo fad passed through my middle school. Because they were cheap, we all got one, and we were obsessed. We debated which brands were the best, which yo-yo style worked well, and how to do tricks. This was all pre-Internet and entirely word of mouth. We’d hang out in little groups at school, showing off what we’d learned and teaching each other.

This past Easter, my mom made us Easter baskets of goodies for our family. Besides the normal varieties of sweets–mostly chocolate, including some British (!) candy bars (omg Lion)–in the basket was… a yo-yo. I pulled it out, wound it up, and dove into several of the tricks I had learned when I was younger: Walk the Dog, Cat’s Cradle, and the Boomerang (don’t quote me on these names). It felt good to flex those old muscles, and impressed my family at the same time 😎.

I started my career in social media and spent four years working at companies where I was mostly the only person doing social media marketing, and I didn’t work with people who knew more than I did (and I did not know much). While I had a lot of freedom to do what I wanted (within reason), I missed out on a ton of learning. No one told me I was doing something wrong, or dumb, or what I could be doing way better, or more of, or whatever.

When I transitioned to web development, I worked at a company where I was surrounded by people smarter than me; who knew more than me; who could teach me things I didn’t know; who could answer questions I had; who I could debate with. While I dedicated time to self-learning, my most important learning experiences were the ones I got from other developers.

Years later, despite my not having picked up a yo-yo since middle school, those tricks were still fresh in my mind. The learning process was social–your friend stood there and taught you the trick, highlighting what you were doing wrong and correcting mistakes until you finally got it–and it worked so well I never lost those skills.

Learning development is no different. You can read all the books you want, but the feedback loop of regular review of and conversations around your code accelerates the process–no book is going to tell you that you implemented its pattern wrong! We have a reputation for being quiet loners, but learning is a social process. Be social! I am eternally grateful both to my colleagues and my communities for everything they’ve taught me. Yours will be an invaluable resource to you.

Weekly Links – Nov. 25th, 2018

Transducers in JavaScript

Array#reduce is one of those things that can be difficult to develop an intuition for, but once you do, what makes it powerful is how reducers (the functions you pass to it) can be composed together. It’s an idea I keep reading about in an attempt to get my head around it, but I only ever catch glimpses of what makes them great. Eric Elliott gives me another glimpse of them.
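To give a flavor of the idea (this is my own minimal sketch, not Eric Elliott’s formulation): a transducer takes a reducer and returns a new reducer, so transformations compose before ever touching the data.

```javascript
// A transducer wraps a reducer, returning a new reducer.
const map = fn => reducer => (acc, value) => reducer(acc, fn(value));
const filter = pred => reducer => (acc, value) =>
  pred(value) ? reducer(acc, value) : acc;

// A base reducer that collects values into an array.
const concat = (acc, value) => [...acc, value];

// Keep the evens, then double them, in a single pass over the array.
const xform = filter(x => x % 2 === 0)(map(x => x * 2)(concat));
const result = [1, 2, 3, 4].reduce(xform, []); // [4, 8]
```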

TypeScript: Was it worth it?

Probably one of the first articles that doesn’t conform to Betteridge’s Law, this look at TypeScript finds that the past few years with it have been great. I’m increasingly coming around to seeing it as useful for my current project; the only hurdles to overcome will be configuring it to work with our libraries and learning it, both of which are worth going through to get TypeScript working.

Company Culture

As I’m looking at hiring my first front-end developer, I’m also thinking critically about what our culture is. The kind of culture that makes Asana one of the best places to work is something I’d like to draw from. A big part of making that work is getting consistent feedback and adjusting to it. Dennis Plucinik, who I had the pleasure of discussing this with on Friday, also wrote about building a team, which centers around respect: respecting their time, autonomy, and goals. Both have given me something to think about as we wrap the initial build & move into maintenance.

Weekly Links – Nov. 18th, 2018

More Functional CSS

I’ve been rebuilding my site into Gatsby with Tailwind, and I’ve really been quite enjoying it so far. The limitations it imposes force you to limit the amount of CSS you have to write, so I’ve been intrigued to see more articles pop up about it. This article from CSS-Tricks explores whether you could combine Functional CSS with a more traditional CSS approach. While I found the article interesting, I had one minor quibble:

Secondly, a lot of CSS property/value pairs are written in relation to one another. Say, for example, position: relative and position: absolute. In our stylesheets, I want to be able to see these dependencies and I believe it’s harder to do that with functional CSS. CSS often depends on other bits of CSS and it’s important to see those connections with comments or groupings of properties/values.

I actually find this to be an advantage for Functional CSS. I like that I have the classes absolute & relative in my HTML, where it’s very clear where they are in relationship to each other.

I still need more experience with it, so we’ll see how it works as I finish up my site.

Pipelines in JavaScript

If you know me, you know I’ve been working on bringing the Pipeline Operator to JavaScript. We’re currently working on implementing the operator in the Babel parser, so things have stalled out while that work is underway. Despite that, enthusiasm in the community remains high, but with several proposals in competition, it can be difficult to keep an eye on what’s going on. That’s why I was really excited to see LogRocket write about the proposal and nail all the details. Definitely check that out if you’re wondering what the latest is.

Wow, Facebook

The other big news out of the past week is that Facebook’s execs have done something pretty messed up, hiring a firm that smeared its critics with both anti-Semitic conspiracy theories and charges of anti-Semitism. Obviously, the moment that was published, they cut ties with said firm, but the damage was already done. Not only have they been embroiled in controversy for a few years now, they have completely bungled every response to their problems. The irony of Facebook, the best platform for conspiracy theories, spreading its own conspiracy theories is too much.

Weekly links – Week of Nov 11th, 2018

Unit Testing

Working at a startup, we’re building out new features so fast that we’ve not infrequently introduced bugs into already-complete parts of the app. We don’t have a dedicated QA team and have few tests, and we’re looking to get some backstopping in place so we can continue to ship with confidence.

While I’m looking at eventually integrating E2E testing with Cypress, I’ve been reading about unit testing to see how it could help us. Interestingly enough, I’m not sure it would. The errors we get are triggered by a series of steps that we probably wouldn’t reproduce in unit tests, so they wouldn’t help prevent these issues.

We could do some integration-type testing, bootstrapping a full or mocked store and dispatching a series of actions to see what results before we get to a full E2E integration, but it feels like unit testing will not be that helpful unless we can unit test large chunks like an entire container.

The articles this week also argue that unit tests not only don’t provide a lot of coverage but make it difficult for your application to change. I agree with this to the extent that your architecture is still changing. As you settle into it, you can start to capture the corner cases in your tests in ways that allow the application to expand its functionality without breaking what exists. That does mean it’s not useful to us yet.

Amazon is Coming to Queens

The big news this week was the leak that Amazon had decided on two cities for its new HQ2(.1/.2?): Arlington & Long Island City, Queens, in New York City. Along with this announcement, we discovered the tax incentives for Amazon coming to Queens could top $3 billion. I’ve read a couple of numbers, and the totals depend on how you calculate the incentives, but even the lower end is at least $1 billion.

This came out on the evening before the midterms, while all eyes were on the results of the election, but even so, we’re already seeing a pretty strong reaction to the news. The process through which Amazon chose a city played cities off against each other, and there have been a couple of calls to make it illegal.

More importantly, it’s not even clear the city will benefit enough to offset the amount of money it’s giving away. The last article below, from the conservative Washington Examiner, goes through the data on these sorts of tax breaks and argues they’re not beneficial: they don’t factor into a company’s location planning (Amazon would have chosen New York City anyway), and the city would benefit more from the move if it didn’t give away almost $3 billion in the process.

It’s weird to see a conservative publication agree with a socialist, but there’s a shared recognition that this does not benefit the city. There’s still time to fight this, but not much, so let’s get moving.

Array Update Trick: What it is and how it works

The other day, I was looking at some code that did an immutable update of an array, replacing the value at one index with a new value. It used a slice-based method, which took 2 spreads and several lines of code after prettier was done with it. After looking at it for a bit, I came up with a new method for doing this update. Let’s take a look at the code and walk through how it works:

The code

Assuming we have an array arr with three elements, [1, 2, 3], and we want to update index 1, here’s how we could do it:

const arr = [1, 2, 3];
const newArr = Array.from({ ...arr, 1: 3, length: arr.length });
console.log(newArr); // [1, 3, 3]

The explanation

The interesting work happens on line 2, where we call Array.from. If you’re not familiar with Array.from, it takes an object and an optional mapper function and returns an array. It converts the object into an array and optionally calls the mapper function on each element. It can be used to convert both iterable and array-like objects into a plain JavaScript array.
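A few quick examples of Array.from’s behavior (my own illustrations, not from the original post):

```javascript
// Iterables convert directly:
const chars = Array.from('abc'); // ['a', 'b', 'c']

// The optional mapper runs on each element:
const doubled = Array.from([1, 2, 3], x => x * 2); // [2, 4, 6]

// Array-like objects only need a length and indexed elements:
const indices = Array.from({ length: 3 }, (_, i) => i); // [0, 1, 2]
```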

The first thing we see is the spread. Note that this is an object spread, not an array spread, so we’re spreading the array’s keys into a new object with numeric keys. An array’s indices are own (enumerable) properties, so a spread keeps them in the resulting object:

const arr = [1, 2, 3];
const newObj = { ...arr };
console.log(newObj); // {0: 1, 1: 2, 2: 3}

When you spread, you can override keys by placing them after the spread, so we can do the following to update the object with new keys:

const arr = [1, 2, 3];
const newObj = { ...arr, 1: 3 };
console.log(newObj); // {0: 1, 1: 3, 2: 3}

However, if we attempted to pass this into Array.from, it would produce an empty array, because the object is neither iterable nor array-like. According to MDN, "array-like" objects are "objects with a length property and indexed elements." We know the object has numeric keys, but length is not transferred because it’s not enumerable and object spread only transfers enumerable own properties of the object. In order to make the result "array-like," we need to give it the length property explicitly.
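You can verify this yourself: without an explicit length, Array.from sees neither an iterable nor an array-like object and returns an empty array.

```javascript
const arr = [1, 2, 3];

// The spread result has the numeric keys but no length property,
// so Array.from treats it as neither iterable nor array-like:
const withoutLength = Array.from({ ...arr, 1: 3 });
console.log(withoutLength); // []
```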

The final result again:

const arr = [1, 2, 3];
const newArr = Array.from({ ...arr, 1: 3, length: arr.length });
console.log(newArr); // [1, 3, 3]

Immutable array updates can be annoying. Hopefully this little trick will make them easier for you.

Weekly Links – Week of Nov 4th, 2018

This week, I updated my James Reads site to use Gatsby, powered by a combination of Pocket & the WordPress site that currently resides on that domain. I do a lot of reading on Pocket, and I’ve been meaning to figure out a way to display both Pocket- & WP-saved links there. Initially, that was going to be pulling in my Pocket list into WordPress, but I’m considering moving away from WordPress as Gutenberg controversially lumbers towards a release. In the meantime, spinning up a Gatsby site was really easy and allowed me to decouple the data source from the front-end display of that data, so I can eventually move the data source without needing to rewrite my front-end. If you’re interested, you can see the source here.

Because I’ve now finally got all my readings up in one place, I can start doing what I’ve been meaning to do for a long time: start a weekly link post! I don’t do enough writing, and this seems like a good way to get into a regular habit without having to commit a ton of time to start. So, without further ado, here’s some highlights of what I’ve been reading and thinking over the past week:

GraphQL

We’ve been considering GraphQL at work to solve our data fetching issues. We’ve got a number of charts & graphs that need data from a few different endpoints, and we’re looking at whether providing a GraphQL API would help simplify things. I’m currently a bit hesitant; a lot of the implementations of GraphQL with React use components to declare their data needs, and my current feeling is components are for display/UI and shouldn’t be tied to data fetching. I’ve been using Redux and have been pushing to get as much of that handling out of components and into middleware, so GraphQL seems like a step backwards.

That said, being able to send a single request instead of a half-dozen would be really nice, and it’s possible I’m being too rigid. The PayPal experience was glowing, and certainly made it easier for them to iterate on what they were building compared to the previous REST-y approach. It was also great to see some of the downsides, but most of those downsides are on the back-end, where it definitely increases the complexity. We’d have to add Node to our stack, and while it makes front-end querying easier, making sure the queries work on the back-end could be more difficult.

I’m also still looking to see if anyone is doing GraphQL queries in Redux middleware, rather than in the components, but the answer seems to be mostly "no" so far. If you are, I would love to hear from you!

Functional (or Utility-first) CSS

The other sore spot I’m spending time looking into is our CSS stack. I’ve used styled-components on two projects now, and I can’t say I’m a huge fan at this point. It makes it difficult to visualize the resulting DOM structure, as every element is a styled-component with a name. Former coworkers have reported performance issues with it, although some of that may no longer be an issue in v4. Although this is probably true of most CSS solutions, I’m finding it requires discipline to not reimplement the same styles multiple times. You really need to be aggressive in extracting CSS either into the theme or shared components for reuse.

Some of this is admittedly on us as users, but it feels like a question of what the tech affords you. For these reasons, I’ve been looking hard at Functional CSS as a paradigm going forward. I’m using TailwindCSS on the aforementioned Gatsby site, and part of what I like is how limiting it is. You can write your own CSS, if you must, but you’re not encouraged to do so. Instead, it pushes you to reuse the classes that already ship with Tailwind. It’s also a lot easier to visualize your HTML, as all the underlying elements are still there, plus you can look at those elements to see exactly what CSS is going to be applied. Lastly, the overall design system is then embedded in this minimal set of classes, so you’re limited in the number of styles you can use at any given time, which enforces more consistency.

It also results in a lot less CSS overall, as each component doesn’t require you to write CSS to style it. I’ve been really excited by how well it has worked on my Gatsby site, and I’ve been looking at whether & how we can apply some of these principles to styled-components, as a complete overhaul is out of the question at this time. Looking at some of those experiences with Functional CSS has been really enlightening.

Voter Disenfranchisement

The midterms were Tuesday, and one of the "memes" that pops up around every election is complaints about the large swath of people who don’t vote. There are, admittedly, some people who explicitly choose not to vote; they believe it doesn’t matter, their vote doesn’t count, both parties are the same, etc. I’m not going to equivocate: those people are wrong–aggressively, stupidly wrong. I remember seeing this comment in one of the lefty groups I’m in: "If voting had the power to change things, they would have taken it away from you." Which is dumb, because they are trying to take it away from you.

On the flip side, those who look down on non-voters generally assume apathy and come with a tone of condescension. The worst part is it doesn’t typically come from an understanding of why people don’t vote, nor does it offer solutions to the real difficulties people have voting.

All of this is on my mind as I read reports from Georgia of 4 hour lines to vote, voting machines locked away unused, and purges of registered voters. So I read the below two articles with interest, especially looking at why young people in particular don’t vote.

The assumption has always been that they don’t care, but the argument Jamelle Bouie makes is the systems are simply not designed to enable individuals with unstable lives to vote. If you move a lot, as young people do, updating your registration every couple of months is a hassle. If you need an ID to update said registration, now there’s another barrier to getting there. If you don’t have access to a car or public transit, getting to the locations to get either of these things becomes another barrier.

This doesn’t just apply to young people either, but to anyone living unstable lives, which are often poor or minorities. Voting takes place on a Tuesday, so voters have to take off work to vote (especially if they have to stand in a 4hr line to do so), and many states don’t have early voting (like my home state, New York, which has abysmally low turnout) or allow vote-by-mail. On top of all that, add the explicit barriers to voting, such as voter ID laws (in TX, you can use your gun or military license to register but not your student or employer ID) and closed polling locations, and you end up with a system that both passively and actively makes it difficult for people to vote.

So when I hear people complain about non-voters, I’m not hearing solutions besides "try harder." We as a culture love to blame individuals for systemic problems, and if you’re actually interested in getting people out to vote, we need to focus on the barriers to voting instead of castigating individuals for not climbing over them.

Maybe if voting didn’t suck, more people would vote? Just a thought…

The Roots team invited me to write a blog post about WP-Gistpen hitting 1.0 (which it finally did recently!). I provide a quick overview of why I built the plugin and what it does. Check it out!
