Why use TypeScript unknown for API response types

I recently extracted and released kefir-ajax, a fetch-like ajax library for Kefir. While I wrote the library in plain JavaScript, it generates TypeScript typings from its JSDoc comments. As you can see here, the json method of the ObsResponse class returns TypeScript’s unknown, rather than the much more common any. Using unknown instead of any can make the API a bit more awkward to use, so I wanted to explain why I made this decision.

TypeScript unknown makes the API more reliable

While unknown may make the API slightly more awkward, it provides much sounder typings for your APIs. With any, you have no guarantee that any of those API values are what TypeScript thinks they are. This pushes errors away from their source: they start appearing wherever you rely on guarantees you don’t actually have. This shows up in your error logs as "cannot read property of undefined" errors and can be very difficult to debug, because API responses aren’t the first place you’ll look unless they’re really close to the location of the error.

If the response body is unknown, TypeScript forces you either to validate it before you can do anything with it or to explicitly cast it to any. For the former, you can use a library like io-ts or runtypes to refine your unknown values into concrete types (I’m currently using io-ts). Both of these packages accept an unknown API response and provide either a validated, strictly typed API response or an error, which you can then handle as you choose.
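If you’d rather not pull in a dependency, the same idea can be expressed with a hand-rolled type guard. This is a minimal sketch of the pattern (the Item shape and function names are mine, for illustration), not kefir-ajax’s API:

```typescript
type Item = { id: number; name: string };

// A user-defined type guard: TypeScript narrows `unknown` to `Item`
// anywhere this returns true.
function isItem(value: unknown): value is Item {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { id?: unknown }).id === 'number' &&
    typeof (value as { name?: unknown }).name === 'string'
  );
}

// Validate an unknown response body, throwing on bad data.
function parseItem(body: unknown): Item {
  if (!isItem(body)) {
    throw new Error('Response body is not a valid Item');
  }
  return body; // already narrowed to Item here
}
```

Libraries like io-ts and runtypes generalize this: you describe the shape once and get both the static type and the runtime check, instead of writing each guard by hand.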

What if you don’t want to validate?

If neither of those works for you, you can cast to any and get on with your life. You’ve explicitly opted into any, rather than having kefir-ajax introduce it for you. Later on, when you decide to introduce strict type validation for your API responses, you can easily find where it’s needed: search your codebase for any and wrap those sites with your new API validations.

Kefir in particular makes this really nice, because you can push any errors down Kefir’s error channel:

ajax$(...)
  .flatMap(response => response.json())
  .flatMap(body => // body is `unknown`
    ResponseType.decode(body).fold<Observable<ResponseType, t.Errors>>(
      Kefir.constantError,
      Kefir.constant,
    )
  )

The error type, t.Errors, gets added to the downstream type, and TypeScript warns you if you haven’t handled it. TypeScript’s compiler helps you introduce the validation without adding new bugs at the same time.

But even if you don’t use Kefir or kefir-ajax, use unknown for your API responses, then validate them. This helps you build trust in what TypeScript is telling you and keeps your errors closer to their cause.

React Testing Tip: Reduce test duplication with a component render function

Whenever you write new tests for your React components, you’ll probably find yourself reusing the same interactions you’ve already written: in multiple tests, you’ll click a button or change an input and assert that the component updates as expected. This comes up often enough that I like to start my test files by making those interactions reusable. Let’s look at how we can do that.

Basic Tests

For testing React components, I use Jest & @testing-library/react. For this example, we’re going to be testing this basic form:

import React, { useState } from 'react';

export const Form = ({ submit }) => {
  const [value, setValue] = useState('');

  const onSubmit = (e) => {
    e.preventDefault();
    if (value === '') return;
    submit(value);
  }

  return (
    <form onSubmit={onSubmit}>
      <label htmlFor="input">Type</label>
      <input id="input" value={value} onChange={e => setValue(e.target.value)} />
      <button>Submit</button>
    </form>
  );
};

We need to write at least two tests for this. First, we’ll attempt to submit the form immediately and check that it fails because the form doesn’t have a value yet. Then we’ll change the value in the form and submit it, which should succeed with the value. Let’s take a look:

describe('Form', () => {
  it('should not submit the form with an empty value', () => {
    const submit = jest.fn();
    const { getByText } = render(<Form submit={submit} />);

    fireEvent.click(getByText('Submit'));

    expect(submit).toHaveBeenCalledTimes(0);
  });

  it('should submit the form with value', () => {
    const submit = jest.fn();
    const value = 'a value';
    const { getByText, getByLabelText } = render(<Form submit={submit} />);

    fireEvent.change(getByLabelText('Type'), { target: { value } });
    fireEvent.click(getByText('Submit'));

    expect(submit).toHaveBeenCalledTimes(1);
    expect(submit).toHaveBeenCalledWith(value);
  });
});

In both tests, we create a mock function with jest.fn() to provide to the rendered Form component. In the first test, we assert that this mock function has not been called, as we don’t want an empty value sent to the submit function. In the second test, we first change the value in the form field then submit the form. This time, we succeed, as we have a valid value in the form.

Writing a render function

These are small tests, but we already see some duplication. The button is queried with the same code twice, the change call is verbose, and render is the same in both tests. All of this would be more readable if we had reusable functions for all of it. Let’s create a renderForm function that will reduce this duplication:

const renderForm = props => {
  const { getByText, getByLabelText } = render(<Form {...props} />);

  const getButton = () => getByText('Submit');
  const getInput = () => getByLabelText('Type');

  const fireButtonClick = () => fireEvent.click(getButton());
  const fireInputChange = value => fireEvent.change(getInput(), { target: { value } });

  return { getButton, getInput, fireButtonClick, fireInputChange };
};

All of the repeated logic is bundled up in named functions, and we can use them like this:

describe('Form', () => {
  it('should not submit the form with an empty value', () => {
    const submit = jest.fn();
    const { fireButtonClick } = renderForm({ submit });

    fireButtonClick();

    expect(submit).toHaveBeenCalledTimes(0);
  });

  it('should submit the form with value', () => {
    const submit = jest.fn();
    const value = 'a value';
    const { fireInputChange, fireButtonClick } = renderForm({ submit });

    fireInputChange(value);
    fireButtonClick();

    expect(submit).toHaveBeenCalledTimes(1);
    expect(submit).toHaveBeenCalledWith(value);
  });
});

These tests provide a much clearer explanation of what is supposed to happen, and we could easily add a third test if we wanted to confirm changing back to an empty string still results in submit not being called:

it('should not submit the form if value changed to empty string', () => {
  const submit = jest.fn();
  const value = 'a value';
  const { fireInputChange, fireButtonClick } = renderForm({ submit });

  fireInputChange(value);
  fireInputChange('');
  fireButtonClick();

  expect(submit).toHaveBeenCalledTimes(0);
});

Now we’re really starting to see the benefits of this render function! There’s less code duplication, since we can reuse the fire functions, but more importantly, if the way we need to query an element changes, we only have to change it in one place. If we decided to change the label on the input field (which we should, because "Type" isn’t very descriptive), we’d only have to update renderForm.

Introducing react-testing-kit

Because this pattern has been so useful to me, I created a package, react-testing-kit, to make writing these render functions easier. Let’s take a look at how we can simplify this code with it:

const renderForm = createRender({
  defaultProps: () => ({ submit: jest.fn() }),
  component: Form,
  elements: queries => ({
    button: () => queries.getByText('Submit'),
    input: () => queries.getByLabelText('Type'),
  }),
  fire: elements => ({
    buttonClick: () => fireEvent.click(elements.button()),
    inputChange: value => fireEvent.change(elements.input(), { target: { value } }),
  }),
});

react-testing-kit passes the elements returned by the elements function to the fire function, and returns the results of all of those functions when you render a new instance. Let’s update the last test to use it:

it('should not submit the form if value changed to empty string', () => {
  const value = 'a value';
  const { fire, props } = renderForm();

  fire.inputChange(value);
  fire.inputChange('');
  fire.buttonClick();

  expect(props.submit).toHaveBeenCalledTimes(0);
});

Very similar, but with less boilerplate.

Conclusion

We can simplify all of our tests with react-testing-kit. If this looks like it would improve your code, check out the project and let me know what you think!

I think avoiding snapshots makes tests-as-documentation much clearer. A specific assertion explains what the developer was attempting to check, versus a more hand-wavey "it looks like this." This is particularly important if you want to assert snapshots in the middle of your tests or after changes, because then it’s a lot less clear what’s actually important. If you have a form that’s supposed to display error messages after attempting to submit invalid values, asserting against those messages specifically is much clearer than a snapshot.

One of the positives of "deep" snapshot testing is if a widely used component changes, all of the snapshot tests from all of the components that use it will also fail, which tells you which components to look at when checking to see if it still displays correctly.

pipe-dom: DOM manipulation with the F#-style pipeline operator

Last week, Babel released version 7.5.0, which included our implementation of the F#-style pipeline operator. If you’re not already aware, TC39 is exploring the potential of a pipeline operator in JavaScript. You can learn more about what’s going on with the operator here. At this point, we’ve got all 3 proposals in Babel, so the next step is to start getting feedback on them. To help with that, I’ve put together a small DOM manipulation library, pipe-dom, for the newly-released F#-style pipeline operator.

When new developers start learning JavaScript, they often start with jQuery. One of jQuery’s defining characteristics is its fluent API, which allows you to write clear, concise code when making a series of DOM modifications. The major downside to this API style is that all of these methods need to be attached to the jQuery object, which means the entire library needs to be loaded to be usable. Minified and gzipped, jQuery is ~30KB, which is a lot to include if you’re just trying to toggle classes.

With the introduction of modules to JavaScript, bundlers are able to analyze what’s used in a project and remove unused functions, a process called tree-shaking. pipe-dom takes jQuery’s fluent API and combines it with the pipeline operator, allowing users to import the exact DOM methods they want and let bundlers remove the rest. Here’s what that might look like:

import { query, addClass, append, on } from 'pipe-dom';

query('ul')
  |> addClass('important-list')
  |> append(
    document.createElement('li')
      |> addClass('important-item')
      |> on('click', () => console.log('item clicked'))
  );

With this, your bundler can include just the code for query, addClass, append and on, and discard the rest of the library.
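Under the hood, helpers like these just need to take their arguments up front and return a function from element to element, so they slot neatly after |>. Here’s a rough sketch of that shape (my assumption about the design, not pipe-dom’s actual source), using a structural stand-in for Element so it doesn’t require a browser:

```typescript
// Anything with a classList.add method; a structural stand-in for
// Element so the sketch doesn't depend on a browser DOM.
type HasClassList = { classList: { add(...names: string[]): void } };

// Take the class names now, return a function that mutates the
// element and passes it through, keeping the pipeline flowing.
const addClass = (...names: string[]) =>
  <T extends HasClassList>(el: T): T => {
    el.classList.add(...names);
    return el;
  };

// Without the pipeline operator, `el |> addClass('a')` is just
// addClass('a')(el):
const classes: string[] = [];
const fakeElement = {
  classList: { add: (...names: string[]) => { classes.push(...names); } },
};
addClass('important-list', 'highlight')(fakeElement);
```

Because each helper is a standalone export rather than a method on a shared object, bundlers can drop the ones you never import.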

I’ve included a small API initially to get the idea out there, but I’m very interested in expanding it, so please open an issue and suggest ideas! I’m very open to expanding the API and evolving the library along with pipeline operator best practices.

Check it out and let me know what you think!

Using TypeScript tagged unions for loading state in Redux

Dealing with loading state is a core requirement of most apps you build. Every app needs data, and that data almost always needs to be loaded from somewhere, so you need to manage your loading state. Redux doesn’t provide any particular structure for this, but combining it with TypeScript enables some useful patterns. Let’s take a look at a few ways of handling it and their downsides, and conclude with an approach I’ve used successfully on a few projects.

We’ll be using TypeScript throughout the examples, but many of the concepts here are useful without the types. The TypeScript-specific content comes towards the end, so I’d encourage JavaScript-only developers to read through to the end as well.

Naive Approach

The most common structure you’ll see for this looks like this:

type State = {
  items: Item[];
};

const defaultState: State = {
  items: []
};

This seems pretty simple: start with an empty array, add items to the items key as you fetch them, and the checks in your views stay simple. But we’ve already got a problem: there’s no distinction between "haven’t loaded any items" and "successfully loaded no items". This can work for some apps, if they’re really simple or they die on load failure (like a CLI app), but for most typical web apps, this is going to be a problem.

So let’s toss in a null instead to indicate that the items haven’t been loaded yet:

type State = {
  items: Item[] | null;
};

const defaultState: State = {
  items: null
};

So now we can tell whether things have been loaded or not: if state.items === null, they haven’t been loaded yet. So far so good.

But what happens if the server errors? We can’t represent an error state with this setup. How do we do that?

Handling error & loading states

We could solve this by adding an error key to the state:

type State = {
  items: Item[] | null;
  error: Error | null;
};

const initialState: State = {
  items: null,
  error: null
};

const successState: State = {
  items: response.data,
  error: null
};

const errorState: State = {
  items: null,
  error: response.error
};

This allows us to represent the initial, loaded & error states, with the examples above expressing those possibilities. It is a bit onerous to derive those states, though: you have to check both values to figure out where you are, because at the "unloaded" step, both are null. A conditional check could look like this:

if (state.items === null && state.error === null) {
  return <UnloadedView />;
}

if (state.items !== null) {
  return <LoadedView items={state.items} />;
}

// We can assume we're in the error state now
return <ErrorView error={state.error} />;

There are various ways of structuring this conditional, and all of them are variously ugly in their own particular way. You could extract these conditionals to functions, which would at least give them readable names.
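For example, extracting the checks into named predicates might look like this (the helper names are mine, for illustration):

```typescript
type Item = { id: number };

type State = {
  items: Item[] | null;
  error: Error | null;
};

// Named predicates make the intent of each branch readable.
const isUnloaded = (state: State): boolean =>
  state.items === null && state.error === null;

const isLoaded = (state: State): boolean => state.items !== null;

const isErrored = (state: State): boolean => state.error !== null;
```

The conditional then reads as if (isUnloaded(state)) …, though the underlying ambiguity of two nullable keys is still there.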

However, this state can’t tell us whether the API request has started or not. If the API request starts immediately, then the difference is immaterial to the view. But if you need this information, you could add another property to indicate whether the request is loading:

type State = {
  loading: boolean;
  items: Item[] | null;
  error: Error | null;
};

const defaultState: State = {
  loading: false,
  items: null,
  error: null
};

And the conditional grows accordingly:

if (state.loading) {
  return <LoadingView />;
}

if (state.items === null && state.error === null) {
  return <UnloadedView />;
}

if (state.items !== null) {
  return <LoadedView items={state.items} />;
}

return <ErrorView error={state.error} />;

Unionize!

But let’s take a step back: we’re really trying to represent various states of the API request by checking the effects of the requests. Instead, we should just represent the current loading state explicitly:

type State = {
  status: 'unloaded' | 'loading' | 'loaded' | 'error';
  items: Item[] | null;
  error: Error | null;
};

const defaultState: State = {
  status: 'unloaded',
  items: null,
  error: null
};

Now we have all of our states represented by a string indicating exactly what state the API request is in. The conditional gets simpler as well: we can use a switch statement to exhaust all possible states:

switch (state.status) {
  case 'unloaded':
    return <UnloadedView />;
  case 'loading':
    return <LoadingView />;
  case 'loaded':
    return <LoadedView items={state.items} />;
  case 'error':
    return <ErrorView error={state.error} />;
}

Now we’re talking! There’s a very clear mapping between the various states and their related views, and you know exactly what data is available to you in each: the error state always has an error object, the loaded state always has the array of items, and the loaded view itself can display a "no items found" message if the array is empty.

This approach is easily extensible as well. You can add 'reloading' and 'reload-error' states, in case you need to refresh data while displaying the stale data at the same time. It’s much more powerful and flexible than adding random keys and hoping you can continue to figure out what’s happening based on the data you have.

In JavaScript, there isn’t much more to be done. Since you can’t express the relationship between status and the other properties, you just have to know that when status === 'loaded', state.items is the array of items, and move on. But if you’re using TypeScript, you’ll need to represent that relationship in the type system. In fact, the above example will error in TypeScript, as state.items could be null. We can solve this with tagged unions.

Tagged unions

Let’s start by looking at tagged unions. A tagged union allows us to combine two types and discriminate between them with a tagged property.

type First = {
  tag: 'first';
  prop: number;
};

type Second = {
  tag: 'second';
  value: string;
}

type FirstOrSecond = First | Second;

declare function getFirstOrSecond(): FirstOrSecond;

const thing: FirstOrSecond = getFirstOrSecond();

switch (thing.tag) {
  case 'first':
    // TypeScript knows prop exists and is a number.
    // Accessing thing.value would cause an error.
    return thing.prop + 1;
  case 'second':
    // And value exists and is a string here
    return thing.value + ' came in second';
}

We’ve created a union type, FirstOrSecond, from two types with an overlapping property, tag. The types can have any additional properties they’d like, as long as there’s one overlapping property, with a constant of some kind, that TypeScript can use to discriminate between the types. Actions in Redux, with their type property, are another common example of this, and typesafe-actions makes it easy to implement in that case.

However, this does not work with arbitrary properties. It’s a common complaint: if you have a union type whose member types have no overlapping properties, you can’t check for the existence of one of those properties to determine which type you’re looking at. This does not work:

type First = {
  prop: number;
};

type Second = {
  value: string;
};

type FirstOrSecond = First | Second;

declare function getFirstOrSecond(): FirstOrSecond;

const thing: FirstOrSecond = getFirstOrSecond();

// TypeScript complains about Second not having a `prop` property.
if (typeof thing.prop === 'number') {
  // TypeScript does not know you have a First
  // without an explicit cast
  const prop = (thing as First).prop;
} else {
  const value = (thing as Second).value;
}

Now that we understand what a tagged union is, we can use the concept to tag our various loading states, along with their associated properties:

type UnloadedState = {
  status: 'unloaded';
};

type LoadingState = {
  status: 'loading';
};

type LoadedState = {
  status: 'loaded';
  items: Item[];
};

type ErrorState = {
  status: 'error';
  error: Error;
};

type State = UnloadedState | LoadingState | LoadedState | ErrorState;

Now, in the switch statement above, we know at the type level what properties are available to us. No more null checks or ugly casts, just a clean description of the various states and their associated, known-to-be-present properties.
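One nicety worth noting: with the union in place, TypeScript can also enforce that your switch handles every status. The assertNever helper below is a common community pattern (my addition, not part of the original example); stateToLabel is a hypothetical view helper:

```typescript
type Item = { id: number };

type State =
  | { status: 'unloaded' }
  | { status: 'loading' }
  | { status: 'loaded'; items: Item[] }
  | { status: 'error'; error: Error };

// If any State variant is unhandled below, `state` won't narrow to
// `never` in the default branch, and this call fails to typecheck.
function assertNever(state: never): never {
  throw new Error(`Unhandled state: ${JSON.stringify(state)}`);
}

function stateToLabel(state: State): string {
  switch (state.status) {
    case 'unloaded':
      return 'Not loaded yet';
    case 'loading':
      return 'Loading...';
    case 'loaded':
      return `Loaded ${state.items.length} items`;
    case 'error':
      return `Failed: ${state.error.message}`;
    default:
      return assertNever(state);
  }
}
```

Add a 'reloading' variant to State and stateToLabel stops compiling until you handle it, which is exactly the safety net you want when extending the scheme.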

Conclusion

Next time you need to implement a data loading scheme, start with a version of this, and it’ll make your data layer much easier to extend over time.

What yo-yos taught me about being a developer

When I was in 4th grade, a yo-yo fad passed through my middle school. Because they were cheap, we all got one, and we were obsessed. We debated which brands were the best, which yo-yo style worked well, and how to do tricks. This was all pre-Internet and entirely word of mouth. We’d hang out in little groups at school, showing off what we’d learned and teaching each other.

This past Easter, my mom made Easter baskets of goodies for our family. Besides the normal varieties of sweets–mostly chocolate, including some British (!) candy bars (omg Lion)–in the basket was… a yo-yo. I pulled it out, wound it up, and dove into several of the tricks I had learned when I was younger: Walk the Dog, Cat’s Cradle, and the Boomerang (don’t quote me on these names). It felt good to flex those old muscles, and it impressed my family at the same time 😎.

I started my career in social media and spent four years working at companies where I was mostly the only person doing social media marketing, and I didn’t work with people who knew more than I did (and I did not know much). While I had a lot of freedom to do what I wanted (within reason), I missed out on a ton of learning. No one told me I was doing something wrong, or dumb, or what I could be doing way better, or more of, or whatever.

When I transitioned to web development, I worked at a company where I was surrounded by people smarter than me; who knew more than me; who could teach me things I didn’t know; who could answer questions I had; who I could debate with. While I dedicated time to self-learning, my most important learning experiences were the ones I got from other developers.

Years later, despite not having picked up a yo-yo since middle school, those tricks were still fresh in my mind. The learning process was social: your friend stood there and taught you the trick, highlighting what you were doing wrong and correcting mistakes until you finally got it. It worked so well that I never lost those skills.

Learning development is no different. You can read all the books you want, but the feedback loop of regular review of and conversations around your code can accelerate the process–no book is going to tell you you implemented its pattern wrong! We have a reputation for being quiet loners, but learning is a social process. Be social! I am eternally grateful to my colleagues and my communities for everything they’ve taught me. Yours will be an invaluable resource to you.

Weekly Links – Nov. 25th, 2018

Transducers in JavaScript

Array#reduce is one of those things that can be difficult to develop an intuition for, but what makes it powerful is how reducers (the functions you pass to it) can be composed together. It’s an idea I keep reading about in an attempt to get my head around it, but I only ever catch glimpses of what makes them great. Eric Elliott gives me another glimpse of them.

TypeScript: Was it worth it?

Probably one of the first articles that doesn’t conform to Betteridge’s Law, this look at TypeScript finds that the past few years with it have actually been great. I’m increasingly coming around to seeing it as useful in my current project; the only issues to overcome will be configuring it to work with our libraries and learning it. Both are hurdles worth clearing to get TypeScript working.

Company Culture

As I’m looking at hiring my first front-end developer, I’m also thinking critically about what our culture is. The kind of culture that makes Asana one of the best places to work is something I’d like to draw from. A big part of making that work is getting consistent feedback and adjusting to it. Dennis Plucinik, who I had the pleasure of discussing this with on Friday, also wrote about building a team, which centers around respect: respecting their time, autonomy, and goals. Both have given me something to think about as we wrap the initial build & move into maintenance.

Weekly Links – Nov. 18th, 2018

More Functional CSS

I’ve been rebuilding my site with Gatsby and Tailwind, and I’ve really been enjoying it so far. The constraints Tailwind imposes limit the amount of CSS you have to write, so I’ve been intrigued to see more articles pop up about it. This article from CSS-Tricks explores whether you could combine Functional CSS with a more traditional CSS approach. While I found the article interesting, I had one minor quibble:

Secondly, a lot of CSS property/value pairs are written in relation to one another. Say, for example, position: relative and position: absolute. In our stylesheets, I want to be able to see these dependencies and I believe it’s harder to do that with functional CSS. CSS often depends on other bits of CSS and it’s important to see those connections with comments or groupings of properties/values.

I actually find this to be an advantage of Functional CSS. I like having the classes absolute & relative right in my HTML, where their relationship to each other is very clear.

I still need more experience with it, so we’ll see how it works as I finish up my site.

Pipelines in JavaScript

If you know me, you know I’ve been working on bringing the Pipeline Operator to JavaScript. We’re currently working on implementing the operator in the Babel parser, so things have stalled out while that work is underway. Despite that, enthusiasm in the community remains high, but with several proposals in competition, it can be difficult to keep an eye on what’s going on. That’s why I was really excited to see LogRocket write about the proposal and nail all the details. Definitely check that out if you’re wondering what the latest is.

Wow, Facebook

The other big news of the past week is that Facebook’s execs did something pretty messed up: they hired a firm that smeared the company’s critics with both anti-Semitic conspiracy theories and charges of anti-Semitism. Obviously, the moment that was published, they cut ties with said firm, but the damage was already done. Not only has Facebook been embroiled in controversy for a few years now, it has completely bungled every response to its problems. The irony of Facebook, the best platform for conspiracy theories, spreading its own conspiracy theories is too much.

Weekly Links – Nov. 11th, 2018

Unit Testing

Working at a startup, we’re building out new features so fast that we’ve not infrequently introduced bugs into already-complete parts of the app. We don’t have a dedicated QA team and have few tests, and we’re looking to get some backstopping in place so we can continue to ship with confidence.

While I’m looking at eventually integrating E2E testing with Cypress, I’ve been reading about unit testing to see how it could help us. Interestingly enough, I’m not sure it would. The errors we get are triggered by a series of steps that we probably wouldn’t reproduce in unit tests, so they wouldn’t help prevent these issues.

We could do some integration-style testing, bootstrapping a full or mocked store and dispatching a series of actions to see what results, before we get to full E2E coverage. But it feels like unit testing won’t be that helpful unless we can test large chunks, like an entire container.

The articles this week also argue that unit tests not only provide little coverage but also make it difficult for your application to change. I agree with this to the extent that your architecture is still changing. As you settle into it, you can start to capture the corner cases in your tests in ways that allow the application to expand its functionality without breaking what exists. That does mean unit testing isn’t useful to us yet.

Amazon is Coming to Queens

The big news this week was the leak that Amazon had decided on two cities for its new HQ2(.1/.2?): Arlington, and Long Island City, Queens, in New York City. Along with this announcement, we discovered the tax incentives for Amazon coming to Queens could top $3 billion. I’ve seen a couple of numbers, and the totals depend on how you calculate the incentives, but even the lower end is at least $1 billion.

This came out on the evening before the midterms, while all eyes were on the results of the election, but even so, we’re already seeing a pretty strong reaction to the news. The process through which Amazon chose its city pits cities against each other, and there have been calls to make it illegal.

More importantly, it’s not even clear the city will benefit enough to offset the money it’s giving away. The last article below, from the conservative Washington Examiner, goes through the data on these sorts of tax breaks and argues they’re not beneficial: they don’t factor into a company’s location planning (Amazon would have chosen New York City anyway), and the city would benefit more from the move if it didn’t give away almost $3 billion in the process.

It’s weird to see a conservative publication agree with a socialist, but there’s a shared recognition that this does not benefit the city. There’s still time to fight this, but not much, so let’s get moving.