Weekly Links – Nov. 25th, 2018

Transducers in JavaScript

Array#reduce is one of those things that can be difficult to develop an intuition for, but once you do, what makes it powerful is how reducers (the functions you pass to it) can be composed together. It’s an idea I keep reading about in an attempt to get my head around it, but I only catch glimpses of what makes them great. Eric Elliott gives me another glimpse of them.
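To make that composition concrete, here’s a minimal toy sketch of the transducer idea (my own helpers, not code from Eric Elliott’s article): each transducer takes a reducer and returns a new reducer, so transformations stack up without ever creating intermediate arrays.

```javascript
// A reducer is any (accumulator, value) => accumulator function.
const sum = (acc, x) => acc + x;

// A transducer wraps a reducer and returns a new reducer. These are toy
// implementations for illustration only.
const map = fn => reducer => (acc, x) => reducer(acc, fn(x));
const filter = pred => reducer => (acc, x) => (pred(x) ? reducer(acc, x) : acc);

// Stack them: double every value, keep results over 4, then sum.
// With this encoding, the outermost wrapper runs first on each value.
const xform = map(x => x * 2)(filter(x => x > 4)(sum));

const result = [1, 2, 3, 4].reduce(xform, 0);
console.log(result); // doubles to 2, 4, 6, 8; keeps 6 + 8 = 14
```

The payoff is that `map` and `filter` here never allocate intermediate arrays the way chained `Array#map`/`Array#filter` calls do.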

TypeScript: Was it worth it?

Probably one of the first articles that doesn’t conform to Betteridge’s Law of Headlines, this look at TypeScript finds the past few years with it have actually been great. I’m increasingly coming around to seeing it as useful on my current project; the only issues to overcome will be configuring it to work with our libraries and learning it. Both are hurdles worth clearing to get TypeScript working.

Company Culture

As I’m looking at hiring my first front-end developer, I’m also thinking critically about what our culture is. The kind of culture that makes Asana one of the best places to work is something I’d like to draw from. A big part of making that work is getting consistent feedback and adjusting to it. Dennis Plucinik, who I had the pleasure of discussing this with on Friday, also wrote about building a team, which centers around respect: respecting their time, autonomy, and goals. Both have given me something to think about as we wrap the initial build & move into maintenance.

Weekly Links – Nov. 18th, 2018

More Functional CSS

I’ve been rebuilding my site with Gatsby and Tailwind, and I’ve really been enjoying it so far. The constraints the approach imposes cut down the amount of CSS you have to write, so I’ve been intrigued to see more articles pop up about it. This article from CSS-Tricks explores whether you could combine Functional CSS with a more traditional CSS approach. While I found the article interesting, I had one minor quibble with this passage:

Secondly, a lot of CSS property/value pairs are written in relation to one another. Say, for example, position: relative and position: absolute. In our stylesheets, I want to be able to see these dependencies and I believe it’s harder to do that with functional CSS. CSS often depends on other bits of CSS and it’s important to see those connections with comments or groupings of properties/values.

I actually find this to be an advantage of Functional CSS. I like having the classes absolute & relative right in my HTML, where it’s very clear how they relate to each other.

I still need more experience with it, so we’ll see how it works as I finish up my site.

Pipelines in JavaScript

If you know me, you know I’ve been working on bringing the Pipeline Operator to JavaScript. We’re currently working on implementing the operator in the Babel parser, so things have stalled out while that work is underway. Despite that, enthusiasm in the community remains high, but with several proposals in competition, it can be difficult to keep an eye on what’s going on. That’s why I was really excited to see LogRocket write about the proposal and nail all the details. Definitely check that out if you’re wondering what the latest is.
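For reference, the simplest variant under discussion (the F#-style pipeline) treats `value |> fn` as plain function application; the competing "smart" proposal adds a placeholder token. Since no engine ships the operator yet, here’s the desugared equivalent of a small pipeline, with helper functions made up for illustration:

```javascript
const double = n => n * 2;
const increment = n => n + 1;

// Proposed:   const result = 5 |> double |> increment;
// Desugared today:
const result = increment(double(5));
console.log(result); // 11
```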

Wow, Facebook

The other big news out of the past week is that Facebook’s execs have done something pretty messed up: hiring a firm that smeared the company’s critics with both anti-Semitic conspiracy theories and charges of anti-Semitism. Obviously, the moment that was published, they cut ties with said firm, but the damage is already done. Not only have they been embroiled in controversy for a few years now, they have completely bungled every response to their problems. The irony of Facebook, the best platform for conspiracy theories, spreading its own conspiracy theories is too much.

Weekly links – Week of Nov 11th, 2018

Unit Testing

When you’re working at a startup, you build out new features so fast that we’ve not infrequently introduced bugs into already-complete parts of the app. We don’t have a dedicated QA team, and we have few tests, so we’re looking to get some backstopping in place so we can continue to ship with confidence.

While I’m looking at eventually integrating E2E testing with Cypress, I’ve been reading about unit testing to see how it could help us. Interestingly enough, I’m not sure it would. The errors we get are triggered by a series of steps that we probably wouldn’t reproduce in unit tests, so they wouldn’t help prevent these issues.

We could do some integration-style testing, bootstrapping a full or mocked store and dispatching a series of actions to see what results, before we get to full E2E coverage. But it feels like unit testing won’t be that helpful unless we can unit test large chunks, like an entire container.
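Here’s a sketch of what that integration-style approach could look like. The reducer and action types are hypothetical, and a tiny hand-rolled `createStore` stands in for Redux so the example is self-contained:

```javascript
// Minimal stand-in for Redux's createStore, just enough for this sketch.
const createStore = (reducer, initialState) => {
  let state = initialState;
  return {
    getState: () => state,
    dispatch: action => { state = reducer(state, action); },
  };
};

// A hypothetical reducer for a todos feature.
const todos = (state = [], action) => {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, done: false }];
    case 'TOGGLE_TODO':
      return state.map((t, i) => (i === action.index ? { ...t, done: !t.done } : t));
    default:
      return state;
  }
};

// Bootstrap a store, replay the series of actions that triggered the bug,
// then assert on the resulting state.
const store = createStore(todos, []);
store.dispatch({ type: 'ADD_TODO', text: 'write tests' });
store.dispatch({ type: 'ADD_TODO', text: 'ship' });
store.dispatch({ type: 'TOGGLE_TODO', index: 0 });

console.log(store.getState());
// [ { text: 'write tests', done: true }, { text: 'ship', done: false } ]
```

Exercising a whole reducer (or several combined) through a realistic action sequence covers the multi-step failures that isolated unit tests miss.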

The articles this week also argue that unit tests not only fail to provide much coverage but also make it difficult for your application to change. I agree with this to the extent that your architecture is still changing. As you settle into it, you can start to capture the corner cases in your tests in ways that allow the application to expand its functionality without breaking what exists. That does mean they’re not useful to us yet.

Amazon is Coming to Queens

The big news this week was the leak that Amazon had decided on two cities for its new HQ2(.1/.2?): Arlington & Long Island City, Queens, in New York City. Along with this announcement, we discovered the tax incentives for Amazon coming to Queens could top $3 billion. I’ve read a couple of numbers, and the totals depend on how you calculate the incentives, but even the lower end is at least $1 billion.

This came out on the evening before the midterms, while all eyes were on the election results, but even so, we’re already seeing a pretty strong reaction to the news. The process through which Amazon chose its city plays cities off against each other, and there have been a couple of calls to make it illegal.

More importantly, it’s not even clear the city will benefit enough to offset the amount of money it’s giving away. The last article below, from the conservative Washington Examiner, goes through the data on these sorts of tax breaks and argues they’re not beneficial, as they don’t factor in to a company’s location planning (Amazon would have chosen New York City anyway) and the city will benefit more from their move if they don’t give away almost $3 billion in the process.

It’s weird to see a conservative publication agree with a socialist, but there’s a shared recognition that this does not benefit the city. There’s still time to fight this, but not much, so let’s get moving.

Array Update Trick: What it is and how it works

The other day, I was looking at some code that did an immutable update of an array, replacing the value at one index with a new value. It used a slice-based method, which took 2 spreads and several lines of code after prettier was done with it. After looking at it for a bit, I came up with a new method for doing this update. Let’s take a look at the code and walk through how it works:

The code

Assuming we have an array arr with three elements, [1, 2, 3], and we want to update index 1, here’s how we could do it:

const arr = [1, 2, 3];
const newArr = Array.from({ ...arr, 1: 3, length: arr.length });
console.log(newArr); // [1, 3, 3]

The explanation

The interesting work happens on line 2, where we call Array.from. If you’re not familiar with it, Array.from takes an iterable or array-like object and an optional mapper function and returns an array. It converts the object into an array, optionally calling the mapper on each element as it’s copied.
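A couple of quick examples of both behaviors:

```javascript
// An iterable (here, a string) converts element by element:
const letters = Array.from('abc');
console.log(letters); // ['a', 'b', 'c']

// An array-like (anything with a length and indexed elements) also works,
// and the optional mapper runs on each element:
const doubled = Array.from({ length: 3 }, (_, i) => i * 2);
console.log(doubled); // [0, 2, 4]
```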

The first thing we see is the spread. Note that this is an object spread, and not an array spread, so we’re spreading the array’s keys into a new object with numeric keys. An array’s keys are own properties, so doing a spread keeps them in the resulting object:

const arr = [1, 2, 3];
const newObj = { ...arr };
console.log(newObj); // {0: 1, 1: 2, 2: 3}

When you spread, you can update keys by placing them after the spread, so we can do the below to update the object with new keys.

const arr = [1, 2, 3];
const newObj = { ...arr, 1: 3 };
console.log(newObj); // {0: 1, 1: 3, 2: 3}

However, if we attempted to pass this into Array.from, it would produce an empty array, because the object is neither iterable nor array-like. According to MDN, "array-like" objects are "objects with a length property and indexed elements." We know the object has numeric keys, but length is not transferred because it’s not enumerable and object spread only transfers enumerable own properties of the object. In order to make the result "array-like," we need to give it the length property explicitly.
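Both claims are easy to verify:

```javascript
const arr = [1, 2, 3];

// length is an own property of the array, but it isn't enumerable...
console.log(Object.getOwnPropertyDescriptor(arr, 'length').enumerable); // false

// ...so object spread drops it, and Array.from sees nothing array-like:
console.log(Array.from({ ...arr, 1: 3 })); // []

// Adding length back explicitly makes the object array-like again:
console.log(Array.from({ ...arr, 1: 3, length: arr.length })); // [1, 3, 3]
```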

The final result again:

const arr = [1, 2, 3];
const newArr = Array.from({ ...arr, 1: 3, length: arr.length });
console.log(newArr); // [1, 3, 3]

Immutable array updates can be annoying. Hopefully this little trick will make them easier for you.

Weekly Links – Week of Nov 4th, 2018

This week, I updated my James Reads site to use Gatsby, powered by a combination of Pocket & the WordPress site that currently resides on that domain. I do a lot of reading on Pocket, and I’ve been meaning to figure out a way to display both Pocket- & WP-saved links there. Initially, that was going to be pulling in my Pocket list into WordPress, but I’m considering moving away from WordPress as Gutenberg controversially lumbers towards a release. In the meantime, spinning up a Gatsby site was really easy and allowed me to decouple the data source from the front-end display of that data, so I can eventually move the data source without needing to rewrite my front-end. If you’re interested, you can see the source here.

Because I’ve now finally got all my reading up in one place, I can start doing what I’ve been meaning to do for a long time: start a weekly link post! I don’t do enough writing, and this seems like a good way to get into a regular habit without having to commit a ton of time up front. So, without further ado, here are some highlights of what I’ve been reading and thinking about over the past week:

GraphQL

We’ve been considering GraphQL at work to solve our data fetching issues. We’ve got a number of charts & graphs that need data from a few different endpoints, and we’re looking at whether providing a GraphQL API would help simplify things. I’m currently a bit hesitant; a lot of the implementations of GraphQL with React use components to declare their data needs, and my current feeling is components are for display/UI and shouldn’t be tied to data fetching. I’ve been using Redux and have been pushing to get as much of that handling out of components and into middleware, so GraphQL seems like a step backwards.

That said, being able to send a single request instead of a half-dozen would be really nice, and it’s possible I’m being too rigid. The PayPal experience was glowing, and certainly made it easier for them to iterate on what they were building compared to the previous REST-y approach. It was also great to see some of the downsides, but most of those downsides are on the back-end, where it definitely increases the complexity. We’d have to add Node to our stack, and while it makes front-end querying easier, making sure the queries work on the back-end could be more difficult.

I’m also still looking to see if anyone is doing GraphQL queries in Redux middleware, rather than in the components, but the answer seems to be mostly "no" so far. If you are, I would love to hear from you!

Functional (or Utility-first) CSS

The other sore spot I’m spending time looking into is our CSS stack. I’ve used styled-components on two projects now, and I can’t say I’m a huge fan at this point. It makes it difficult to visualize the resulting DOM structure, as every element is a styled-component with a name. Former coworkers have reported performance issues with it, although some of that may no longer be an issue in v4. Although this is probably true of most CSS solutions, I’m finding it requires discipline to not reimplement the same styles multiple times. You really need to be aggressive in extracting CSS either into the theme or shared components for reuse.

Some of this is admittedly on us as users, but it feels like a question of what the tech affords you. For these reasons, I’ve been looking hard at Functional CSS as a paradigm going forward. I’m using TailwindCSS on the aforementioned Gatsby site, and part of what I like is how limiting it is. You can write your own CSS, if you must, but you’re not encouraged to do so. Instead, it pushes you to reuse the dozens of CSS classes that already ship with Tailwind. It’s also a lot easier to visualize your HTML, as all the underlying elements are still there, plus you can look at those elements to see exactly what CSS is going to be applied. Lastly, the overall design system is then embedded in this minimal set of classes, so you’re limited in the number of styles you can use at any given time, which enforces more consistency.

It also results in a lot less CSS overall, as each component doesn’t require you to write CSS to style it. I’ve been really excited by how well it has worked on my Gatsby site, and I’ve been looking at whether & how we can apply some of these principles to styled-components, as a complete overhaul is out of the question at this time. Looking at some of those experiences with Functional CSS has been really enlightening.

Voter Disenfranchisement

The midterms were Tuesday, and one of the "memes" that pops up around every election is complaints about the large swath of people who don’t vote. There are, admittedly, some people who explicitly choose not to vote; they believe it doesn’t matter, their vote doesn’t count, both parties are the same, etc. I’m not going to equivocate: those people are wrong–aggressively, stupidly wrong. I remember seeing this comment in one of the lefty groups I’m in: "If voting had the power to change things, they would have taken it away from you." Which is dumb, because they are trying to take it away from you.

On the flip side, those who look down on non-voters generally assume apathy and come with a tone of condescension. The worst part is it doesn’t typically come from an understanding of why people don’t vote, nor does it offer solutions to the real difficulties people have voting.

All of this is on my mind as I read reports from Georgia of 4 hour lines to vote, voting machines locked away unused, and purges of registered voters. So I read the below two articles with interest, especially looking at why young people in particular don’t vote.

The assumption has always been that they don’t care, but the argument Jamelle Bouie makes is the systems are simply not designed to enable individuals with unstable lives to vote. If you move a lot, as young people do, updating your registration every couple of months is a hassle. If you need an ID to update said registration, now there’s another barrier to getting there. If you don’t have access to a car or public transit, getting to the locations to get either of these things becomes another barrier.

This doesn’t just apply to young people either, but to anyone living unstable lives, which are often poor or minorities. Voting takes place on a Tuesday, so voters have to take off work to vote (especially if they have to stand in a 4hr line to do so), and many states don’t have early voting (like my home state, New York, which has abysmally low turnout) or allow vote-by-mail. On top of all that, add the explicit barriers to voting, such as voter ID laws (in TX, you can use your gun or military license to register but not your student or employer ID) and closed polling locations, and you end up with a system that both passively and actively makes it difficult for people to vote.

So when I hear people complain about non-voters, I’m not hearing solutions besides "try harder." We as a culture love to blame individuals for systemic problems, and if you’re actually interested in getting people out to vote, we need to focus on the barriers to voting instead of castigating individuals for not climbing over them.

Maybe if voting didn’t suck, more people would vote? Just a thought…

The Roots team invited me to write a blog post about WP-Gistpen hitting 1.0 (which it finally did recently!). I provide a quick overview of why I built the plugin and what it does. Check it out!

This post is part of the thread: Project: WP-Gistpen - an ongoing story on this site. View the thread timeline for more context on this post.

Big changes afoot in the React/Redux ecosystem

If you’re using React and/or Redux, you should be aware of two major changes coming soon in each of those libraries.

First, Redux just released v4.0.0-beta.1. There don’t appear to be any major breaking changes unless you were using some of the types Redux is no longer exporting. There are also some additional checks and errors around dispatching too early in middleware, so it should solve a common pitfall when setting up middleware. It’s a problem I’ve experienced a few times when using brookjs, and it’s why we recommend dispatching an INIT action after the application is bootstrapped.

In addition to the upcoming change in Redux, React has seen some major changes as well. First, the new Context API was proposed and landed, the first major change to go through React’s new RFC process. The Context API has always been considered somewhat experimental, although it’s been used widely by a number of libraries, including react-redux and react-router, and the current implementation ran into a number of challenges. The biggest is that shouldComponentUpdate can tell React that nothing in a given hierarchy has changed; if a component in that hierarchy should update as a result of a change in context, that change isn’t able to propagate down the tree.

The new API uses a pair of components: a Provider to set the value of a Context, and a Consumer to read it. The Consumer takes a render function as its child to provide the value of the Context, giving the Provider control over when its dependents render. It’s currently behind a feature flag, which means it may not be available in your regular applications just yet. Once it comes out from behind that flag, you’ll be able to use Context in your applications, knowing it’s a stable API you can rely on.

More importantly, though, React continues pushing towards async rendering by deprecating all of the componentWill* lifecycle methods. The reason for this major change is that these lifecycle methods can be unsafe in an async world, so the suggestion is to move most of the logic previously implemented in them to either componentDid* or the render method itself. New versions of these methods prefixed with UNSAFE_ will be introduced, making it very clear that they could cause problems in an async world.

One of the major use cases for componentWillMount in particular is to run logic on the server, as componentDidMount never runs on the server. They’ll be introducing a separate lifecycle hook for server-rendering only where that logic could live. Otherwise, any logic that currently lives in componentWill* should move to either the corresponding componentDid* or render itself.

This is going to have a major impact on the community, Facebook’s "move fast and break things" applied to open source, but the overall goal is laudable. React is ultimately moving towards an async-rendering world, and while the initial Fiber implementation makes async rendering possible, more work needs to be done in order to fully enable it. Unfortunately, it looks like there’s still a lot more upheaval in the ecosystem to come before we get there.

A codemod is planned for application developers, so it should (hopefully) be less painful for apps to make the switch. Library authors are likely to be hit hardest. I’m already looking at what changes are required to get brookjs working with async rendering, as we definitely use some of the now-deprecated lifecycle hooks. We’ll see if this turns out to be difficult.

“It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.”

Nathaniel Borenstein

Arrow functions are not the solution you’ve been looking for

JavaScript’s Arrow functions were supposed to solve all our this-related problems but instead just replaced those this-related problems with other this-related problems.

A friend of mine posted this in our local Slack channel, and I’ve seen a variation of this problem a number of times already:

function foo(ddb) {
  return {
    listTables: (params = {}, cb = idFunc) => {
      const self = this
      let currentList = params.currentList || []
      
      return toPromise(ddb, ddb.listTables, omit(params, 'currentList'), cb)
        .then(r => {
          console.log('LISTTABLES result', r)
          currentList = currentList.concat(r.TableNames || [])
          
          if (!r.LastEvaluatedTableName || r.TableNames.length === 0) {
            return { ...r, TableNames: currentList }
          }

          return self.listTables({   // <- Fails here
              ...params,
              currentList,
              ExclusiveStartTableName: r.LastEvaluatedTableName,
            }, cb)
        })
    }
  }
}

Note the <- Fails here. Can you spot why? I’ll wait…

Figure it out…

…yet?

Ok, I’ll tell you. this inside of listTables is lexically bound, so it’s the same this as inside foo, not the returned object. So if foo is called in global scope, which it likely is, this === window, or this === undefined if we’re in strict mode.
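One way out is to drop `this` entirely and close over the object instead. Here’s a sketch of that fix; `toPromise` and `omit` are reduced to stand-in implementations so the example runs, the callback parameter is dropped for brevity, and the `ddb` backend is faked:

```javascript
// Stand-ins for the helpers from the original snippet.
const toPromise = (ddb, fn, params) => fn(params);
const omit = (obj, key) => {
  const { [key]: _dropped, ...rest } = obj;
  return rest;
};

function foo(ddb) {
  const api = {
    listTables(params = {}) {
      let currentList = params.currentList || [];

      return toPromise(ddb, ddb.listTables, omit(params, 'currentList'))
        .then(r => {
          currentList = currentList.concat(r.TableNames || []);

          if (!r.LastEvaluatedTableName || r.TableNames.length === 0) {
            return { ...r, TableNames: currentList };
          }

          // api is closed over, so no this (and no self) is needed.
          return api.listTables({
            ...params,
            currentList,
            ExclusiveStartTableName: r.LastEvaluatedTableName,
          });
        });
    },
  };
  return api;
}

// A fake paginated backend with two pages of table names.
const pages = [
  { TableNames: ['a', 'b'], LastEvaluatedTableName: 'b' },
  { TableNames: ['c'] },
];
let call = 0;
const fakeDdb = { listTables: () => Promise.resolve(pages[call++]) };

foo(fakeDdb).listTables().then(r => console.log(r.TableNames)); // ['a', 'b', 'c']
```

Naming the object and referring to it directly sidesteps the whole question of what `this` is bound to.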

We’re just moving our problems around, and we’re even getting to the point of introducing more syntax to solve the problem arrow functions were supposed to solve in the first place. See the new class fields proposal, which will allow you to write this:

class MyClass {
  myMethod = () => {
    // ...code
  }
}

and the function stays bound to the class instance. None of this really solves the underlying problem, which is the repeated attempt to shoehorn patterns into the language that don’t belong.

JavaScript is not a traditional class-oriented language. Stop trying to make it one.

I think my favorite thing about WebAssembly is the possibility of writing both the front- and back-end in a language other than JavaScript. Node is great, but sometimes it’s not the right choice for a particular use case, and being able to choose another language and still get the kind of isomorphism you get running a V8 instance on a server is amazing.

I also really want to use it as an opportunity to learn another language. If Rust can compile to WebAssembly and run in the browser, I can learn Rust, and learn it more easily because I can apply it in an area where I already have a lot of experience. I don’t think I’m the only one for whom this is true, and I think that’s awesome.