James DiGioia

my little web home

Day of Oct 24th, 2020

  • The Working Families Party Created This Mess

    While most of us were transfixed by the various hyperreal absurdities on our national stage—Donald Trump testing positive for COVID-19, Donald Trump threatening to not accept the results of a presidential election—a much smaller but still notable political event had transpired in New York.

    Read at 05:07 pm, Oct 24th

  • If the Left Wants to Win Elections, It Should Heed the Lessons of This Progressive Third Party

    Eight months into Don­ald Trump’s pres­i­den­cy — and with the 2018 elec­tions fast approach­ing — the ques­tion of how the Left should engage with elec­toral pol­i­tics is again being hot­ly debat­ed.

    Read at 03:45 pm, Oct 24th

  • EXCLUSIVE: Fox News Passed on Hunter Biden Laptop Story Over Credibility Concerns

    Mediaite has learned that Fox News was first approached by Rudy Giuliani to report on a tranche of files alleged to have come from Hunter Biden’s unclaimed laptop left at a Delaware computer repair shop, but that the news division chose not to run the story unless or until the sourcing and veracity…

    Read at 02:41 pm, Oct 24th

  • Does Cuomo Share Blame for 6,200 Virus Deaths in N.Y. Nursing Homes?

    The death toll inside New York’s nursing homes is perhaps one of the most tragic facets of the coronavirus pandemic: More than 6,400 residents have died in the state’s nursing homes and long-term care facilities, representing more than one-tenth of the reported deaths in such facilities across the…

    Read at 02:27 pm, Oct 24th

  • New Study Finds No Direct Link Between Subway & COVID-19 Spread

    Subway and other mass transit use is dramatically down since New York first hit PAUSE to slow the spread of COVID-19. In March, subway ridership was estimated to be down around 90% from normal levels, and in September, it was hovering around 60-70% lower than pre-pandemic times.

    Read at 02:18 pm, Oct 24th

Day of Oct 23rd, 2020

Day of Oct 22nd, 2020

    • Activists Build Facial Recognition to ID Cops Who Hide Their Badges

      In order to hold police accountable when they try to hide their identities, a growing number of activists are developing facial recognition tools that identify cops, The New York Times reports — a striking inversion of the way cops tend to use facial recognition on protestors and suspects.

      Read at 01:39 pm, Oct 22nd

    • The Political Education of Killer Mike

      How Michael Render—a rapper from Atlanta who also happens to be a Second Amendment–loving, Bernie Sanders–boosting, unapologetically pro-Black businessman—became one of the loudest and most original political voices in the country.

      Read at 03:40 am, Oct 22nd

    • Tagged unions in JavaScript

      Tagged unions in JavaScript 2020-10-21 # Why tagged unions? Redux. MobX. XState. Vuex. RxJS. State management is hard, and developers are always looking for a tool to help them. Tagged unions are a programming pattern that you can use with immutable state libraries, or even on their own. Tagged unions make it possible to visualize all the states your application can be in, and prevent you from accessing the wrong data at the wrong time. Note: Tagged unions are also called “algebraic data types”, “variants”, “sum types”, “discriminated unions”, or “enums” in different programming languages. # What to expect This post covers the following topics in order: “Classic” state management (large objects with every property at once, but lots of null values) Tagged union state management (objects with a string “tag”, and only relevant properties are present) Excerpts from a real life example with around 8,000 lines of code Several appendixes to read based on your own curiosity # Pizza app: classic style For the purposes of this blog post, I’m going to use a small React UI as an example. Tagged unions work well with many libraries (and without any libraries), and even with many other programming langauges. Let’s look at an example React app for online pizza delivery. You’ve probably worked with state like this before: there’s a couple boolean properties controlling what mode you’re in, there’s a property that might be null, and there’s state (size, style, toppings) that’s not always relevant (I’ll take a large pepperoni with errors). First up, we check if state.error is not null. If so, we show the error screen. Even though you probably won’t see this much, it has to be the first if statement. After all, if there’s an error, it’s definitely the most important thing to show. Next up, we have to check outForDelivery before orderReceived. After all, your order is still technically received while your pizza is out for delivery, so that screen should take priority. If your order has been received, we should show a screen letting you know that, rather than staying on the order form. Finally, we have a bunch of form logic for updating the pizza information before we place our order. The order of these if statements is critical to this component working correctly. error, orderReceived, and outForDelivery are all trying to tell us which screen to show, but we have to resort to a hierarchy when they conflict with each other. With tagged unions, we pick one property (the “tag”) to be in charge of which screen to show, and we only keep track of the properties related to the current screen. # Pizza app: tagged unions The key difference here is this mode property with 4 different string possibilities. This mode is in charge of what screen to show. We’ve listed which values are allowed, and there’s nothing to second guess. It’s not possible to have a confusing state like “order received” AND “out for delivery” AND “error” in this system. If you wanted to keep track of a complicated state like that, you would make a new mode like delivery-error (we can assume order is received if the order is out for delivery, so it doesn’t need to be received-delivery-error). Now that we have a single source of truth on the current mode, we can write the if statements in any order. I’m choosing to put ordering first since it’s the first step in the user workflow. For the next if statement, we can use the 2nd step in the workflow. 
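      The post's own code snippets are not included in this excerpt, so the following is only an illustrative sketch of the tagged-union version being described; the mode names, the pizza fields, and the use of React's useState hook are my assumptions, not the author's code.

        // Illustrative sketch only (not the post's original code); assumes React
        // and made-up mode names for the four screens described above.
        import { useState } from "react";

        type PizzaState =
          | { mode: "ordering"; size: string; style: string; toppings: string[] }
          | { mode: "orderReceived" }
          | { mode: "outForDelivery" }
          | { mode: "error"; error: string };

        function PizzaApp() {
          const [state, setState] = useState<PizzaState>({
            mode: "ordering",
            size: "large",
            style: "thin crust",
            toppings: [],
          });

          if (state.mode === "ordering") {
            // Moving to the next step means passing a full new object, because
            // the "orderReceived" mode shares no properties with "ordering".
            return <button onClick={() => setState({ mode: "orderReceived" })}>Place order</button>;
          }
          if (state.mode === "orderReceived") {
            return <p>Order received!</p>;
          }
          if (state.mode === "outForDelivery") {
            return <p>Your pizza is on the way.</p>;
          }
          return <p>Something went wrong: {state.error}</p>;
        }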
You can see that this time around, we did not have to use the “updater function” style of setState. When transitioning from one mode to another, you’ll usually want a full new object from scratch, since most modes don’t share properties with each other. Again we use the next step of the workflow, and we use a full new object from scratch with setState, since we are transitioning modes. Last but not least, we check for the error state. Notice how we can grab state.error just like before, but this time we’ve checked state.mode first to make sure it makes sense to do that. With tagged union code, you should always check state.mode before attempting to use properties that only exist on certain modes. # A real life example Small examples are all well and good for learning, but how does this work on large apps? At my current job, I refactored a large portion of our most complicated screen (8,000+ lines of TypeScript) to use tagged unions to store most of the state. The rest of the team agreed the code was easier to reason about, and we now have lots of errors TypeScript can catch automatically for us. That application has 30 (!) different modes within its tagged union. In order to help make sense of these, we arranged them hierarchically, similar to URLs. export type MapMode = | MapPlacemarkEditMode | MapPlacemarkBrowseMode | MapBeaconsBrowseMode; export type MapPlacemarkEditMode = | { mode: "placemark/edit/area"; areaPlacemark: AreaPlacemark } | { mode: "placemark/edit/label"; labelPlacemark: LabelPlacemark } | { mode: "placemark/edit/regular"; placemark: Placemark }; export type MapPlacemarkBrowseMode = { mode: "placemark/browse"; placemarks: Placemark[]; }; export type MapBeaconsBrowseMode = { mode: "beacons/browse"; beacons: Beacon[]; }; This way you can narrow down your modes to be more specific. For example, a form component that lets you edit a placemark can take in mode: MapPlacemarkEditMode so that it’s only possible to render the form when in those 3 modes. This also means that the form code can be simplified since it only needs to check 3 different modes internally. I’ll admit that it was a little tricky hooking these modes up to URLs within the browser. We wrote some code so that when you updated the mode, we automatically set the route using React Router to the URL that most closely matches your current state. Of course, URLs can’t preserve all the same state as these JavaScript objects, but people expect to lose unsaved changes when they refresh the browser anyway. # Conclusion Tagged unions can be used to model all your possible application states. They can be used in pure JavaScript without any libraries. They are even stronger in TypeScript where mistakes can be caught before running your program. And most importantly, they can reduce the confusion about what state your application is in. # Further reading I have included several appendixes containing more things to learn about. Want to use tagged unions, but still using React class components? Learn about the React class component gotchas first. Love TypeScript? Add type safety with TypeScript. Not interested in TypeScript? Fortify your tagged unions using this one little helper functions (developers love it!) Need to add “Undo” to your app? Add “Undo” in one line of code. Do you use Vue? Check out tagged unions in Vue. Want to use tagged unions for more than just application state? Tagged unions are great for data modeling. 
If you’re still itching to learn more, try searching for algebraic data types (the more popular term compared with “tagged union”), and sum types. Many of these results use Haskell or other functional programming languages for their code examples. # Appendix: React class component gotchas If you are using React, be careful with this.setState, the state management method for class components. React’s this.setState merges its parameter into the current state, so it is not suitable for use with tagged unions, which need to be able to add/remove properties. If you have to use this.setState, you can nest your tagged union state within an object like this: For TypeScript, you should make a tagged union type instead, which can catch your type errors at compile time, before your code is even run: type State = | { mode: "loading" } | { mode: "error"; message: string } | { mode: "success"; flavors: string[] }; let state: State = { mode: "loading" }; state.flavors; if (state.mode === "loading") { console.log("Loading..."); } else if (state.mode === "error") { console.error(state.message); } else { console.log(state.flavors.join(", ")); } You can even take it one step further in TypeScript with something called exhaustiveness checking. If you add another case to your State type, TypeScript will emit a type error until you fix your code to support that newly added case. This means that you can automatically find most of the code you need to update when adding new modes. function assertNever(value: never): never { throw new Error(`assertNever: ${value}`); } type State = | { mode: "apple pie" } | { mode: "banana split" } | { mode: "cherry turnover" }; let state: State = "apple pie"; switch (state.mode) { case "apple pie": console.log(1); break; case "banana split": console.log(2); break; case "cherry turnover": console.log(3); break; default: assertNever(state); break; } Now if you update the State type with a 4th mode dark chocolate, you’ll get a TypeScript error on your assertNever call, saying: Argument of type `{ mode: "dark chocolate"; }` is not assignable to parameter of type 'never'. The error message looks a bit weird, but you’ll get used to it. You can go to all the places in your code that produce errors like this and fix them. You might actually have a fully functioning app that responds to your new state afterward. When using tagged unions, you might enjoy this helper function if you’re using JavaScript instead of TypeScript. Strict objects throw errors when you try to access properties that don’t exist, something that happens a lot more frequently when you’re using tagged unions. If you’re using TypeScript, you can omit this, since TypeScript will catch your type errors at compile time. You’ll have to remember to use strictObject every time you assign to this.state, but it can really save you from some headaches if you remember to use it. Nobody likes getting undefined when they expect a real value. # Appendix: Undo support Have you ever been asked to add “undo” to your application? It can be daunting to figure out where to even start. If you store your state in a tagged union, you have a huge advantage. I will leave the implementation of redo as an exercise for the reader. class App { constructor() { this.state = { mode: "loading" }; this.undoStack = []; } update(state) { this.undoStack.push(this.state); this.state = state; } undo() { this.state = this.undoStack.pop(); } } Note: The | symbol means “this type OR that type” in TypeScript. 
Normally it’s written like type T = A | B;, but when you have lots of types over multiple lines, you can “line up the pipes” on the left to look nice. This is the “union” part of “tagged unions”. TypeScript will ensure that you check the mode before you access other parts of your state, so you only ever access the right state at the right time. # Appendix: Tagged unions in Vue Vue is well suited to use tagged unions. Just remember to assign the entire state object every time, rather than modifying its properties. # Appendix: Tagged unions for data modeling Tagged unions don’t have to be used for state management. You could use them for data modeling as well. Consider these two ways to model mathematical shapes. The class approach is nice because people expect it, and you can write .area() for both rectangles and circles. But what if you’re getting a shape back from a server as JSON? You have to worry about serializing and deserializing these objects as plain JSON objects. Using plain JSON objects as tagged unions means that anyone can write a function that operates on any plain JSON object. Source: Tagged unions in JavaScript
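      The data-modeling appendix above describes the class-versus-plain-object trade-off without showing the objects themselves, so here is a minimal sketch of what the tagged-union version might look like; the Shape name and its two variants are my own illustration, not code from the post.

        // Hypothetical shapes modeled as a tagged union of plain JSON objects.
        type Shape =
          | { kind: "rectangle"; width: number; height: number }
          | { kind: "circle"; radius: number };

        // Any function can operate on these plain objects, and data coming back
        // from a server as JSON needs no class-based (de)serialization step.
        function area(shape: Shape): number {
          return shape.kind === "rectangle"
            ? shape.width * shape.height
            : Math.PI * shape.radius * shape.radius;
        }

        area(JSON.parse('{"kind":"circle","radius":2}') as Shape); // ~12.57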

      Read at 01:48 am, Oct 22nd

Day of Oct 21st, 2020

    • Hard-Boiled Poker: Book News: Leatherface vs. Tricky Dick: ‘The Texas Chain Saw Massacre’ as Political Satire -- Existentialist musings of an online poker player

      Not long ago I appeared on The Poker Zoo podcast to talk about my book Poker & Pop Culture as well as other things concerning the state of the game today. It was a fun conversation covering a lot of topics, including the history and current state of poker blogs. Near the beginning of the show I gave a quick summary of how this blog came about and where it fit into the larger story of poker blogs circa 2006. I talked about how blogs began to fade away, particularly after Black Friday (April 2011) when this big, global online "community" of poker players to which we all belonged suddenly became fractured, especially from the perspective of those of us in the U.S. I persisted with Hard-Boiled Poker, however, continuing to post every weekday for another five years or so, then still posting quite frequently after that before slowing down to begin working in earnest on Poker & Pop Culture. As we talked about on the podcast, writing P&PC is what really more or less moved me off the blog, as I didn't have the time or mental fuel to write about poker both for the book and here. (And the book is a monster, by the way -- 432 pages, 160,000 words.) As I have noted here before, I can't help but view Poker & Pop Culture as kind of a culmination of my poker writing, bringing together a lot of what I was sharing here on the blog and elsewhere over 12 or so years of writing about the game. I was eager after that to write about something not poker. I had another novel in mind, and in fact was starting to work on it when the novel coronavirus emerged to distract. But by then I already had a different project in the works... and extra motivation, too, thanks to a deal with a house to publish it. That book is now finished and in "pre-production," you might say. Editing, proofing, formatting, etc. The current schedule has it coming out either end of 2020 or early 2021. I mentioned the book at the end of another enjoyable interview I did for Club Poker a while back. When mentioning the book there I suggested that with my next project I had decided to follow the "pop culture" path rather than the "poker" one. In the past, starting a few years before my poker writing, I did some more "academic" writing while teaching full-time. Some of that writing was about horror films, and I placed articles in a few different publications including Film Literature and the Journal of Popular Culture. One of those articles was about The Blair Witch Project. Another concerned The Stepfather and its relationship to an episode in Dashiell Hammett's The Maltese Falcon. Another focused on Halloween III: Season of the Witch and modern horror franchises, more generally speaking. If you're curious what I say in that article, here is a post I wrote for another blog that explains it a bit. You can also check the Wikipedia page for H3 to see a reference to one of the arguments I make in that article. For a time I was considering a book-length project about horror movies. However that was around the time poker stepped in to create a big life detour that included leaving that idea to the side. Readers of the blog know I've also had a significant interest in poker-playing presidents, in particular Richard Nixon who earned a lot of space in the "Poker in the White House" chapter in P&PC. Several years ago I began teaching a second American Studies class at UNC Charlotte that focuses on Nixon alongside my "Poker in American Film and Culture" class. 
In fact, for a while I thought I might write a short book about Nixon, examining his strange and remarkable political career through the lens of his poker playing. I tell you all of that to help explain how I ended up spending a good part of the last year writing this new book, one that combines my interest in horror films and in Richard Nixon. The book focuses on the 1974 film The Texas Chain Saw Massacre, and the title gives a good idea of the book's approach: Leatherface vs. Tricky Dick: 'The Texas Chain Saw Massacre' as Political Satire. The idea for Chain Saw first came to director Tobe Hooper near the end of 1972. The film finally premiered on October 1, 1974 (46 years ago today). In other words, the movie was conceived, written, shot, edited, and ultimately premiered exactly as the Watergate scandal unfolded, with Nixon resigning (and getting pardoned) shortly before the first audiences got to see Chain Saw. Over the years Hooper in interviews frequently made reference to the film's many social and political subtexts, including directly citing Watergate as having "inspired" Chain Saw. His partner and co-writer Kim Henkel has also made reference to the filmmakers' awareness of the contemporary political context when making the movie. Meanwhile the film itself includes many moments and details that further encourage a reading of the movie as a kind of commentary on Watergate, if you can stop being frightened enough to notice them. My book does a deep dive into those details, providing a minute-by-minute analysis of the movie in order to explore its numerous political messages, many of which pertain to Watergate. I don't argue away other interpretations of the film, or deny other intentions of those who made Chain Saw (including the primary one to scare the hell out of you). Nor do I suggest the film presents a consistent, ongoing "allegory" of Watergate, although I do often liken Leatherface and his murderous family to Nixon and all the president's men. It was a very fun book to write, and I'm hopeful readers will enjoy it when it appears. I think fans of Chain Saw should like it, as should those interested in presidential politics and political satire. Speaking of the latter, I delve quite a bit into other examples of Watergate satire along the way (other films, books, columns, comedy records) as I show how Chain Saw also takes a similar, indirect and (darkly) humorous approach in its criticism of Nixon and his administration. For a film that has been picked over as much as Chain Saw, I do think I was able to cover some new ground with my analysis and comparison of the film to the political horror show happening while it was being made. I also hope the book helps readers understand just how villainous a character Nixon was, and how at the time the film premiered it wasn't at all outrageous to compare him to a mask-wearing, chainsaw-wielding maniac.  I will be sharing more details here soon regarding a publication date for Leatherface vs. Tricky Dick and how to get it. I also look forward to sharing the fantastic cover created by my publisher, Headpress, which is a scream. Stay tuned! Labels: *by the book, Leatherface vs. Tricky Dick, Poker & Pop Culture Source: Hard-Boiled Poker: Book News: Leatherface vs. Tricky Dick: ‘The Texas Chain Saw Massacre’ as Political Satire — Existentialist musings of an online poker player

      Read at 01:57 pm, Oct 21st

    • I need to learn about TypeScript Template Literal Types - DEV

      It's Sunday in New Zealand and I don't want to get out of bed yet, so instead I'm going to listen to the new Menzingers album and learn about TypeScript Template Literal Types and write down what I found out as I go! TypeScript string types: Let's start with what I already know. TypeScript has a string type. It covers all strings like const hello = "Hello World";, or const myName = `My name is ${name}`;. You can also use a string literal type, such as type Hello = 'hello', which only matches that specific string. You can use Union Types to be combine string literal types to be more precise about allowed string inputs. One good example is type Events = 'click' | 'doubleclick' | 'mousedown' | 'mouseup' | ...; There are limitations to what TypeScript can know. Template strings will cause specific string types to expand out to the generic string type: type A = 'a'; const a: A = `${'a'}`; // Argument of type 'string' is not assignable to parameter of type '"a"'. In my experience, once you start typing stuff with specific strings you often end up duplicating a bunch of stuff too. Take the Events example from before: type EventNames = 'click' | 'doubleclick' | 'mousedown' | 'mouseup'; type Element = { onClick(e: Event): void; onDoubleclick(e: Event): void; onMousedown(e: Event): void; onMouseup(e: Event): void; addEventListener(eventName: Event): void; }; If I add a new event name to the EventNames type, I also have to change the Element type! That's probably fine most of the time, but it could cause issues. Template Literal Types "basics" (Spoiler: it's not basic at all!) The PR where Template Literal Types looked cool when I first read it, and people got pretty excited! Then the TypeScript 4.1 Beta release notes came out, so I'm going to go through that first. TypeScript 4.1 can concatenate strings in types using the same syntax from JavaScript: type World = "world"; type Greeting = `hello ${World}`; // same as // type Greeting = "hello world"; Using Union Types with concatenation enables combinations: type VerticalAlignment = "top" | "middle" | "bottom"; type HorizontalAlignment = "left" | "center" | "right"; type Alignment = `${VerticalAlignment}-${HorizontalAlignment}` declare function setAlignment(value: Alignment): void; setAlignment("top-left"); // works! setAlignment("middle-right"); // works! setAlignment("top-middel"); // error! There's also some fancy new mapping syntax which means I can change the Element type from before: type EventNames = 'click' | 'doubleclick' | 'mousedown' | 'mouseup'; type Element = { [K in EventNames as `on${Capitalize<EventNames>}`]: (event: Event) => void; } & { addEventListener(eventName: EventNames): void; }; // same as // type Element = { // onClick(e: Event): void; // onDoubleclick(e: Event): void; // onMousedown(e: Event): void; // onMouseup(e: Event): void; // addEventListener(eventName: Event): void; //}; That's pretty deep - it takes each of the strings in the EventNames type, passing it to a Capitalize type, and prepending on to each of them! Now if I add a new event name to the EventNames, the Element type will already reflect it! These new features are obviously really powerful, and people have been making some amazing stuff, e.g.: I am so sorry for this... 
I wrote a JSON parser using @typescript's type system typescriptlang.org/play?ts=4.1.0-… 20:12 PM - 04 Sep 2020 An expression parser that supports: - Natural numbers - The five main operations (+*-/%) - Correct operators precedence - Parentheses* All written in TypeScript _at the type level_! * It gives an error because of infinite recursion, but it seems to work typescriptlang.org/play?useDefine… twitter.com/NicoloRibaudo/… 20:21 PM - 11 Sep 2020 Who's now going to write a JavaScript parser using TypeScript's type system? 😏 https://t.co/tKsMbRcDyj Grégory Houllier collected some of these examples into one place, so I can see how they work by looking at the implementations! Type-safe string dot notation: Wow! 😱 I just made this dot notation string type-safe with TypeScript 4.1 I wanted this for so long! typescriptlang.org/play?ts=4.1.0-… 13:45 PM - 25 Sep 2020 Full implementation here. What does it do? const user = { projects: [ { name: "Cool project!", contributors: 10 }, { name: "Amazing project!", contributors: 12 }, ] }; get(user, "projects.0.contributors"); // <- I want this string to be type-safe! I thought I was starting with an easy one, but it's still pretty complex! I simplified it a little bit (and probably broke it) but it'll be easier to figure out - my implementation is here. How does it work? I'll look at PathValue first. type PathValue<T, P extends Path<T>> = P extends `${infer Key}.${infer Rest}` ? Key extends keyof T ? Rest extends Path<T[Key]> ? PathValue<T[Key], Rest> : never : never : P extends keyof T ? T[P] : never; This is the code that take an object and a valid path to an object and returns the type of the value at the end of that path. Conditional types are really hard to process, so I'm going to rewrite it how I think about it. PathValue is a generic type so it's kind of like a type function, and it takes two things, T which could be anything, and P which has to be a valid Path for T. PathValue is also a conditional type - it has the shape A extends B ? C : D. In this case it has several nested conditionals! But each of the never bits is a condition that doesn't return a type, so I can simplify it down to the two valid condition paths. That looks something like this: typefunction PathValue (T, P: Path<T>) { if (P extends `${infer Key}.${infer Rest}` && Key extends keyof T && Rest extends Path<T[Key]>) { return PathValue<T[Key], Rest>; } if (P extends keyof T) { return T[P]; } } Since the first condition actually calls PathValue again, this is a recursive conditional type 🤯🤯🤯. There are two base conditionals, one continues the recursion, the other ends it. Again I'll look at the "easier" one first. if (P extends keyof T) { return T[P]; } If P is just a string and it is an exact key of T, then return the type of that key. That means it is the end of the path and it can stop recursing. The other condition is the magical bit. if (P extends `${infer Key}.${infer Rest}` && Key extends keyof T && Rest extends Path<T[Key]>) { return PathValue<T[Key], Rest>; } Here's the fancy new bit: P extends `${infer Key}.${infer Rest}` This type says "check if the string contains a '.', and give me the two string literal types either side of the '.'". The equivalent JavaScript would be something like: const [Key, Rest] = P.split('.'); The next part of the conditional takes the first string literal (Key) and makes sure it is a valid key of T: The last part of the conditional takes the second string literal (Rest) and makes that it is a valid Path for the type of T[Key]. 
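      To make the pattern easier to play with, here is a self-contained, deliberately simplified sketch of the same idea; unlike the real implementation linked above it drops the Path<T> constraint and the array-key handling, and the get helper and example object are mine.

        // Simplified sketch of a recursive PathValue-style type (not the full
        // implementation from the post): split the path on ".", index one level,
        // and recurse on the rest of the path.
        type PathValue<T, P extends string> =
          P extends `${infer Key}.${infer Rest}`
            ? Key extends keyof T
              ? PathValue<T[Key], Rest>
              : never
            : P extends keyof T
              ? T[P]
              : never;

        function get<T, P extends string>(obj: T, path: P): PathValue<T, P> {
          return path
            .split(".")
            .reduce((value: any, key) => value?.[key], obj as any) as PathValue<T, P>;
        }

        const person = {
          profile: { name: "Ada", contact: { email: "ada@example.com" } },
        };

        const email = get(person, "profile.contact.email"); // inferred as string
        const nope = get(person, "profile.address");        // inferred as never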
So in the case of the example: const user = { projects: [ { name: "Cool project!", contributors: 10 }, { name: "Amazing project!", contributors: 12 }, ] }; get(user, "projects.0.contributors"); If these conditions are all true, then the recursion continues and you go to the level in the object, and the next chunk of the dot-notation string. That kind of makes sense and I now kind of understand P extends `${infer Key}.${infer Rest}` which seems pretty important. Next up is the Path type: type Path<T, Key extends keyof T = keyof T> = Key extends string ? T[Key] extends Record<string, any> ? | `${Key}.${Path<T[Key], Exclude<keyof T[Key], keyof Array<any>>> & string}` | `${Key}.${Exclude<keyof T[Key], keyof Array<any>> & string}` | Key : never : never; Again I'm going to write it out in a different way: typefunction Path<T, Key extends keyof T = keyof T> { if (Key extends string && T[Key] extends Record<string, any>) { return `${Key}.${Path<T[Key], Exclude<keyof T[Key], keyof Array<any>>> & string}` | `${Key}.${Exclude<keyof T[Key], keyof Array<any>> & string}` | Key; } } This says that it Key is a string, and the type of the property on type T (Aka T[Key]) is a Record, then return some fancy union. There are three parts to the union: `${Key}.${Path<T[Key], Exclude<keyof T[Key], keyof Array<any>>> & string}` `${Key}.${Exclude<keyof T[Key], keyof Array<any>> & string}` Key; What does the Exclude<keyof T[Key], keyof Array<any>> bit mean? It uses TypeScript's built-in Exclude type which will remove any types in the second parameter from the first. In this specific case, it is going to remove any valid key for an Array (e.g. push, map, slice). I guess this also includes Object keys, but I'm not super sure how that works off the top of my head. This bit seems to me to be a bit of a nice to have, as it reduces the final set of possible paths a bit, but I can ignore it for now. That gives me: `${Key}.${Path<T[Key], keyof T[Key]> & string}` `${Key}.${keyof T[Key] & string}` Key; The & string bit is a little trick to reduce keyof T[Key] down to only being a string - I think because you can have symbol keys as well. So I can ignore that too: So the final union is basically: `${Key}.${Path<T[Key], keyof T[Key]>}` | `${Key}.${keyof T[Key]}` | Key; This is another recursive type, where each level of recursion is concatenating the valid key paths like `${Key}.{Path}`, so you get `${Key}.{Path}` | ${Key}.{(`${Key}.{Path})`} | `${Key}.{(`${Key}.{Path})`}` ... etc. That handles all the deeply nested keys. That is combined with the very next layer of keys ${Key}.${keyof T[Key]}, and the current keys Key. So at a high level there are two recursive types, one with recurses through the valid keys of an object and builds up the whole valid set, using Template Literal Types to concatenate the keys with a ".". The other type splits the concatenated keys and works out the type at each layer of the path. Makes sense I think? Pretty powerful stuff if you hide it away behind a nice API in a library. Type-safe document.querySelector: Ok, just _one more_ TypeScript 4.1 experiment: Reimplementation of document.querySelector() but this version parses complex CSS queries and infers the correct return type 🔥 Get the code: bit.ly/2RLEBvU 18:24 PM - 22 Sep 2020 Full implementation here. What does it do? 
This one is a little different, as it doesn't validate that the string is a valid CSS selector (although I'm pretty sure that would be possible with these new types), but it does figure out the best type of the result of the query: const a = querySelector('div.banner > a.call-to-action') //-> HTMLAnchorElement const b = querySelector('input, div') //-> HTMLInputElement | HTMLDivElement const c = querySelector('circle[cx="150"]') //-> SVGCircleElement const d = querySelector('button#buy-now') //-> HTMLButtonElement const e = querySelector('section p:first-of-type'); //-> HTMLParagraphElement How does it work? Let's look at some of the helper types first: type Split<S extends string, D extends string> = S extends `${infer T}${D}${infer U}` ? [T, ...Split<U, D>] : [S]; type TakeLast<V> = V extends [] ? never : V extends [string] ? V[0] : V extends [string, ...infer R] ? TakeLast<R> : never; type TrimLeft<V extends string> = V extends ` ${infer R}` ? TrimLeft<R> : V; type TrimRight<V extends string> = V extends `${infer R} ` ? TrimRight<R> : V; type Trim<V extends string> = TrimLeft<TrimRight<V>>; These are super clever, and seem like they could live alongside Capitalize etc in the base TypeScript types. Split: type Split<S extends string, D extends string> = S extends `${infer T}${D}${infer U}` ? [T, ...Split<U, D>] : [S]; Again I'm going to rewrite it: typefunction Split<S extends string, D extends string> { if (S extends `${infer T}${D}${infer U}`) { return [T, ...Split<U, D>]; } return [S]; } So there is another recursive type that takes an input string, and some splitter string D. If the input string contains the splitter string, the part of the string that comes before the splitter is put into an array, and then the second part of the string is passed to the Split type again. The result is splatted (...) which means that the final result will be a single flattened array of strings. If the input string doesn't contain the splitter, then the whole string is returned. It's wrapped in an array so that the splat works. TakeLast: type TakeLast<V> = V extends [] ? never : V extends [string] ? V[0] : V extends [string, ...infer R] ? TakeLast<R> : never; This one doesn't have anything to do with Template Types particularly but it's still interesting. Rewriting gives me something like this: typefunction TakeLast<V> { if (V extends []) { return; } if (V extends [string]) { return V[0]; } if (V extends [string, ...infer R]) { return TakeLast<R>; } } One change I might make to this would be to have type TakeLast<V> be typefunction TakeLast<V extends Array<string>>? That would limit the valid input types and possibly give an easier error message. Three different paths through here: 1) If the array is empty, return nothing. 2) If the array contains one element, return it. 3) If the array contains more than one element, skip the first element and call TakeLast on the array of remaining elements. TrimLeft/TrimRight/Trim: type TrimLeft<V extends string> = V extends ` ${infer R}` ? TrimLeft<R> : V; type TrimRight<V extends string> = V extends `${infer R} ` ? TrimRight<R> : V; type Trim<V extends string> = TrimLeft<TrimRight<V>>; More Template String types here: Trim is pretty nice, it just calls TrimRight and then TrimLeft. 
TrimLeft and TrimRight are basically the same so I'll just rewrite one of them: typefunction TrimLeft<V extends string> { if (V extends ` ${infer R}`) { return TrimLeft<R>; } return V; } And I'll actually rewrite this again cause what it's actually doing is: typefunction TrimLeft<V extends string> { if (V.startsWith(' ')) { return TrimLeft<R>; } return V; } This type recurses until it finds a string that doesn't start with a space. Makes sense, but still very cool to see it as a type. TrimRight is pretty much identical but it does an endsWith instead. StripModifiers The last bit of Template Type magic I want to look at here is: type StripModifier<V extends string, M extends string> = V extends `${infer L}${M}${infer A}` ? L : V; type StripModifiers<V extends string> = StripModifier<StripModifier<StripModifier<StripModifier<V, '.'>, '#'>, '['>, ':'>; That can be rewritten to be something like this: typefunction StripModifier<V extends string, M extends string> { if (V.contains(M)) { const [left, right] = V.split(M); return left; } return V; } Then the StripModifiers type just uses the StripModifier type with each of the characters than can follow an element tag name in CSS: typefunction StripModifiers<V extends string> { StripModifier(V, '.'); StripModifier(V, '#'); StripModifier(V, '['); StripModifier(V, ':'); } The rest of this example uses these different types to split the CSS selector on relevant characters (' ', '>', and ','), and then select the relevant bit of the remaining selector and returning the correct type. A lot of the heavy lifting is done by this type: type ElementByName<V extends string> = V extends keyof HTMLElementTagNameMap ? HTMLElementTagNameMap[V] : V extends keyof SVGElementTagNameMap ? SVGElementTagNameMap[V] : Element; It maps from a string (such as 'a') to a type (such as HTMLAnchorElement), then checks SVG elements, before falling back to the default Element type. What next? The next examples get progressively more bonkers, so I'm not going to write down all my thinking about them - you should check them out though and see if you can see how they work. The JSON parser is probably the best mix of complex and readable. From this I have a couple thoughts: 1) I should definitely use this for TSQuery 2) TypeScript is probably going to need new syntax for types soon because stuff like: type ParseJsonObject<State extends string, Memo extends Record<string, any> = {}> = string extends State ? ParserError<"ParseJsonObject got generic string type"> : EatWhitespace<State> extends `}${infer State}` ? [Memo, State] : EatWhitespace<State> extends `"${infer Key}"${infer State}` ? EatWhitespace<State> extends `:${infer State}` ? ParseJsonValue<State> extends [infer Value, `${infer State}`] ? EatWhitespace<State> extends `,${infer State}` ? ParseJsonObject<State, AddKeyValue<Memo, Key, Value>> : EatWhitespace<State> extends `}${infer State}` ? [AddKeyValue<Memo, Key, Value>, State] : ParserError<`ParseJsonObject received unexpected token: ${State}`> : ParserError<`ParseJsonValue returned unexpected value for: ${State}`> : ParserError<`ParseJsonObject received unexpected token: ${State}`> : ParserError<`ParseJsonObject received unexpected token: ${State}`> is pretty tricky 😅. All in all that was pretty useful for me, I think I get how Template Literal Types work a bit now. I guess I'll see next time I try to use them. Let me know if this was useful, it was a pretty unfiltered and unedited 🙃 Source: I need to learn about TypeScript Template Literal Types – DEV
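      If you want to see what those string helpers actually evaluate to, here is a small self-contained check; the types are re-stated from the post as quoted above, and the example selector strings are mine.

        // Split and Trim, copied from the excerpt above, applied to sample strings.
        type Split<S extends string, D extends string> =
          S extends `${infer T}${D}${infer U}` ? [T, ...Split<U, D>] : [S];

        type TrimLeft<V extends string> = V extends ` ${infer R}` ? TrimLeft<R> : V;
        type TrimRight<V extends string> = V extends `${infer R} ` ? TrimRight<R> : V;
        type Trim<V extends string> = TrimLeft<TrimRight<V>>;

        type Parts = Split<"div.banner > a.call-to-action", " > ">;
        // Parts is ["div.banner", "a.call-to-action"]

        type Clean = Trim<"  button#buy-now  ">;
        // Clean is "button#buy-now"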

      Read at 12:48 pm, Oct 21st

    • Andrew Cuomo is no hero. He's to blame for New York's coronavirus catastrophe

      Andrew Cuomo may be the most popular politician in the country. His approval ratings have hit all-time highs thanks to his Covid-19 response. Some Democrats have discussed him as a possible replacement for Joe Biden, due to Biden’s perceived weakness as a nominee.

      Read at 02:43 am, Oct 21st

    • Elon Musk becomes Twitter laughingstock after Bolivian socialist movement returns to power

      Tesla CEO Elon Musk became an internet punchline on Monday after the party of Evo Morales, a left-wing Bolivian president whom Musk intimated that America had every right to overthrow, was restored to power by the Bolivian people.

      Read at 01:52 am, Oct 21st

Day of Oct 20th, 2020

    • Swaathi Kakarla

      Hello! What's your background and what do you do? Hey there! I’m Swaathi, the co-founder and CTO of Skcript. We help organizations transform with fast product innovations where we focus on technologies like AI, Blockchain and Robotic Process Automation (RPA).

      Read at 02:05 pm, Oct 20th

    • Beware of Misleading HUD Stats - by Carlos Welch

      Out of all the coaches I know, I seem to be one of the few who are hyper focused on exploiting the specific tendencies of my opponents that I observe at the table. Most others seem to stick to a solid game plan that they developed away from the table. I believe in-game adjustments are extremely important in soft small stakes tournaments where players make catastrophic mistakes and are slow to adapt. As I mentioned in last month’s article, one of the primary tools I use to gather this information in online tournaments is a Heads Up Display, or HUD, that records stats based on what a player has done so far. HUDs are great, but they can lead to issues if you misinterpret the data. Here are some tips that will help you avoid some of the pitfalls of inexperienced HUD users. Focus on Number of Opportunities, Not Number of Hands It’s pretty obvious that the efficacy of HUD data relies upon a decent sample size. After an opponent plays the first hand dealt to him, the HUD will show that he is playing 100% of hands. We know this is not the same as him playing 100% of all possible hands because it’s not unusual for him to be dealt a playable hand over a sample size of one. However, if it still shows he is playing 100% of hands after being dealt 20 hands, then that HUD stat becomes useful. After 200 hands, you can take that stat to the bank. However, this is not because 200 hands is an inherently good sample size. It’s because it’s a good sample size for that particular stat which makes another tally every time that player has an opportunity to play, which happens in every hand dealt. This is not true of other stats. The one that comes to mind is the C-bet stat. A sample size of 200 hands may not inherently be a good sample size for the C-bet stat because the player does not have the opportunity to c-bet in all 200 hands dealt. C-betting requires raising preflop and getting called. If a player is passive, he does not raise preflop very often which means he should be getting called a lot less when he does. This leaves him fewer opportunities to c-bet than more aggressive players. Over 200 hands, he may have only had 10 c-betting opportunities, which is not a good sample size. If your HUD shows you that a player c-bets 100% of the time over a sample size of 200 hands, you have to focus on the number of opportunities to determine if this stat is actionable. This opportunities issue comes with all postflop stats because players do not go postflop on every hand dealt. The problem becomes worse the deeper into postflop play you go. For example, river stats have notoriously low opportunity counts. The same goes for stats that require a particular action on a previous street. For example, the River C-bet stat has fewer opportunities than the River Bet stat because it requires the player to have bet the turn in order to have the opportunity for a continuation of his aggression from the previous street. The River Bet stat includes these hands as well as other hands where the player did not bet on the previous street. Preflop Stats are the Most Reliable Unlike postflop stats, preflop stats converge over a smaller sample size because there is an opportunity for preflop decisions in almost every hand. The stats I find the most useful are as follows. VPIP Voluntarily Put In Pot How often the player plays unforced by the big blind. PFR Preflop Raise How often the player raises preflop. RFI Raise First In How often the player open raises folded to. 3-bet 3-bet How often the player 3-bets. 
Open Limp Open Limp How often the player open limps folded to. Postflop Ag% Aggression Frequency How often the player bets or raises postflop. VPIP, PFR, and 3-bet are universally recognized stats in online poker so I won’t go into too much detail as to why they are useful. RFI is important because it tells us a bit more about how wide a player is opening than PFR does. For example, there are some players who only open a tight range of hands when folded to, but they do a good job of 3-betting lite against players who open too many hands. This raise registers as both a 3-bet and as a PFR, but not a RFI because they were not the first player in the pot. If you notice that this sort of player has a high PFR, you may think he is opening too many hands and 3-bet him lite when in reality, his opening range is very strong. This is a spot where the RFI stat could have saved you. Open Limp is another stat that is extremely useful in soft small stakes games. Do not confuse this stat with the Total Limp stat which includes hands where the player over limped behind another limper. The former indicates much more weakness than the latter. RFI helps us understand the Open Limp stat better because it gives us a clue as to whether or not a limping range is protected. For example, a player who likes to open limp and has a low RFI is much more likely to have strong hands in his limping range than a player who likes to open limp but has a high RFI. More than likely, the second player type is raising with his strong hands and leaving his limping range vulnerable to attacks. Postflop Ag% is the one postflop stat that converges over a small sample size like the preflop stats because it combines all postflop decisions into one and does not require any particular action from the previous street as long as the player gets postflop in some fashion. It takes the total number of postflop bets and raises and divides them by the number of opportunities a player had to make these plays. This gives us the frequency with which a player plays aggressively postflop and indicates how likely he is to c-bet, raise, check raise, or generally bet when given the opportunity. Be Aware of Recent Table Dynamics Given enough opportunities, these stats can usually give you a solid idea as to how a player plans to play going forward, but not always. You have to be aware of recent table dynamics because this is one thing that the HUD cannot tell you. For example, we’ve all seen a tight player who waited all day for Aces, got them cracked, and then lost his mind playing a much wider range than before out of tilt. If you missed the fact that he just got his Aces cracked because you were not paying attention to the showdowns, you may continue to rely on his low VPIP/PFR stats when in reality, this guy may be playing like a maniac now. This can result in you making some tough laydowns that you shouldn’t. I personally take advantage of this lack of attention in my opponent’s games by switching up how I play throughout the tournament. I’m fairly tight in the beginning because I am focusing more on gathering reads than playing hands, especially when there are no antes in play. By the time the antes come in and I have all the information I need, I start to get more involved almost out of desperation because my stack is starting to approach the danger zone. I use the reads I’ve gathered to profitably get out of line against players who are too weak to stop me. 
The fact that my HUD stats make me look like a nit at that point helps me to sell the story. If I get a double to a more comfortable big stack, I settle down right at the time when my most recent HUD stats make me look like a maniac, assuming that it took me a decent number of hands to get the double. Players who are new to the table and missed my early nit phase will have me pegged as a maniac and make calls they shouldn't against my now solid ranges. Admittedly, it’s hard to know when this is happening, but if you pay more attention to the most recent table dynamics than the HUD stats as a whole, you can often pick up on these changes. The HUD is a very powerful tool that does the vast majority of the work needed to gather reads on your opponents for you. As long as you do not fall prey to these common mistakes, it will help improve your winrate. I hope this article has increased your awareness of how to use your HUD more effectively. Source: Beware of Misleading HUD Stats – by Carlos Welch
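      The article's core caution, dividing actions by opportunities and distrusting frequencies built on too few opportunities, can be summed up in a few lines; the StatSample shape and the 20-opportunity cutoff below are illustrative choices of mine, not something from the article or any particular HUD.

        // Sketch: a HUD-style frequency that refuses to report until the
        // opportunity count (not the hand count) is large enough to matter.
        interface StatSample {
          actions: number;       // e.g. times the player actually c-bet
          opportunities: number; // e.g. times the player could have c-bet
        }

        function statFrequency(sample: StatSample, minOpportunities = 20): number | null {
          if (sample.opportunities < minOpportunities) return null; // sample too small
          return sample.actions / sample.opportunities;
        }

        statFrequency({ actions: 10, opportunities: 10 }); // null: 100% over a tiny sample
        statFrequency({ actions: 45, opportunities: 60 }); // 0.75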

      Read at 01:17 pm, Oct 20th

    • Deciding to switch companies.

      Note: this is an article for staffeng.com, written for an audience of folks on cusp of reaching a Staff Engineer role. My father was a professor of economics. After he completed his PhD in his late twenties, he started teaching at one university, got tenure at that university, and walked out forty-some years later into retirement. Working in technology, that sounds like a fairytale. There are very few software companies with a forty-year track record, and even fewer folks whose forty-year career consisted of one employer. There used to be a meme that many engineers spent either one or four years at each company to maximize their equity grants and then bounced on to the next. If that ever happened, it certainly isn’t common behavior for folks who aspire towards or reach Staff-plus roles. Instead, generally those folks stay, and are rewarded for staying, at a given company as long as the circumstances support their success. If those circumstances change, they tend to either leave shortly thereafter or spend a while burning out and then leave after exhausting their emotional reservoir. It takes years to build the visibility and social credibility to get promoted from a Senior Engineer role to a Staff-plus role, which makes it very difficult to walk away if you feel like you’re just one hump away from the promotion. Leaving, it can feel like, means starting over from scratch. Then again, as described by Duretti Hirpa and Keavy McMinn, it’s common for folks to attain their first Staff-plus title by joining a new company. Even with all your internal credibility, sometimes leaving is the most effective path forward. What’s the right decision for you? Before going further, I want to recognize two very different job-switching experiences: one of privileged flexibility and another of rigid constraints. Your residency might depend on a work-sponsored visa. You might be supporting an extended family. You might be constrained to a geographical area with few employers. This advice focuses on the former circumstances, which are more common circumstances for someone who’s deep enough into a technology career to pursue a Staff role. You should absolutely discount it to the extent this doesn’t reflect your circumstances. Why leaving works The company that knows your strengths the best is your current company, and they are the company most likely to give you a Staff-plus role. However, actually awarding the role depends on so many circumstantial concerns, that this isn’t how it works out in practice. If your current team is very senior, it may be hard to justify your impact at the Staff engineer level because it’s being attributed to your peers. Your manager might have a limited budget that doesn’t have room for another Staff engineer. You might lack an internal sponsor. There simply might not be the need for an additional Staff engineer at your company. Any of these can mean that while you ought to be promoted, your current company won’t. Conversely, when you interview for new roles, you can simply keep interviewing until you find a company that’s able to grant the title. The interview process often brings an automatic sponsor with it – the hiring manager – whose incentives will never be more aligned with yours than in the interview process. The technical interviews are an inconsistent and unreliable predictor of success, which is bad for the industry and bad for companies, but works in your favor if you’re set on attaining a Staff-plus role and are willing to conduct a broad search. 
Interviewing creates the opportunity to play “bias arbitrage,” finding a company that values your particular brand of bullshit disproportionately. That might be a company that values folks with conference speaking visibility, your experience designing APIs, or your PhD thesis on compilers. Similarly, sometimes you’ll get into a rut at a company where your reputation is preventing forward progress. Perhaps you’ve tagged “difficult” after flagging inclusion issues. Maybe you embarrassed an influential Director at lunch and they’re blocking your promotion. A new company lets you leave that baggage behind. Yeah, of course, it’s always an open question whether you can really leave anything behind you in the tech industry. It can feel a bit cliquey at times. If you’ve worked in tech hubs, at larger companies, and for more than ten years, then you almost certainly have mutual connections with the folks interviewing you. If you have a bad run at a company, maybe your manager was a bully or maybe you were going through a challenging period in your own life, it can feel like a cloud poisoning your future prospects. That said, much like the interview process in general, references and backchannel reference checks are deeply random. If you need any further evidence of that, look to the serial harassers who continue to get hired job after job at prominent companies. Things to try before leaving If you’re planning to leave due to lack of interest, excitement, support or opportunity, it’s worthwhile to at least explore the internal waters first. This lets you carry your internal network with you while still getting many of the advantages of switching companies. Depending on your company’s size and growth rate this might not be an option for you, but there are some folks who switch roles every two-to-three years within the same parent company, and find that an effective way to remain engaged and learning. On the other hand, if you’re considering leaving due to burnout or exhaustion, it’s sometimes possible to negotiate a paid or unpaid sabbatical where you can take a few months recharging yourself, often in conjunction with switching internal roles. This is more common at larger companies. (In case you were wondering, no your coworkers taking parental leave is not “on sabbatical” or “on vacation.”) Leaving without a job Speaking of burnout, if you’re particularly burned out, it’s worth considering leaving your job without another job lined up. There’s a fairly simple checklist to determine if this is a good option for you: Does your visa support this? Are you financially secure for at least a year without working? Do you work in a high-density job market, remotely, or are you flexible on where your next job is? Do you interview well? Could you articulate a coherent narrative to someone asking you why you left without a job lined up? Are there folks at your previous company who can provide positive references? If all of those are true, then I don’t know anyone who regrets taking a sabbatical. However, bear in mind that it’s only the folks who took six-month-plus sabbaticals who felt reborn by the experience. Folks taking shorter stints have appreciated them but often come back only partially restored. Taking the plunge If you’re almost at the Staff promotion in your current company, there is absolutely another company out there who will give you the Staff title. Whether or not you’ll enjoy working there or be supported after getting there, that’s a lot harder to predetermine. 
If your internal reputation is damaged or if you’ve been repeatedly on the cusp of promotion but victim to a moving criteria line, then you should seriously consider switching roles if the title is important to you – at some point you have to hear what your current company is telling you. Conversely, if you’re happy in your current role outside of the title, consider if you can be more intentional about pursuing your promotion rather than leaving. Many folks hit a rut in their promotion path to Staff-plus, and using techniques like the promotion packet can help you get unstuck. If you’ve used all the approaches, taken your self-development seriously, and still can’t get there – it’s probably time to change. That said, it’s easy to overthink these things. Few folks tell their decade-past story of staying at or leaving some job. Source: Deciding to switch companies.

      Read at 01:15 pm, Oct 20th

    • GTO, the Value of Information, and the Nature of the Solution to No-limit Hold ‘em - by Brian Space

      Is GTO (game theory optimal) a way of life? Solvers and computational robots, aka BOTs, are now a staple of the online poker ecosystem. Even in live play, I often find myself in conversations with poker players aspiring to discuss poker concepts rationally. Poker has adopted game theory concepts and terminology, with phrases like Game Theory Optimal evolving to represent other related concepts – that is understandable and how language works. Nonetheless, with the use and abuse of “solvers”, PioSolver a popular example, there is widespread misunderstanding. I personally think solvers have led to net poorer play at all but the highest levels. Even there, the counterfactual experiment would be required to prove they have made a difference -- in their absence poker theory would have evolved thoughtfully and rationally albeit with different data. First, I will clarify what is understood about poker theory broadly in the context of no-limit Texas hold ‘em (NLH). Heads-up play is now played most proficiently by computational algorithms -- by a BOT . The play is complex, including offering insights that have yet to be rationalized. This form of poker has a GTO solution and a formal Nash equilibrium. Still, the BOTs make approximations and are explicitly built to handle only certain bet sizing and have to estimate the value of a move that is out of their training set. Heads-up limit hold ‘em is a numerically exactly solved subset of NLH.  Extant NLH BOTs also often employ other game abstractions but are nonetheless better than humans in almost all cases. Even in multiplayer NLH, Bots are emerging as dominant with the results published in the world’s best scientific journals. A simple yet formidable BOT available for public play and training is PokerSnowie, a poker artificial intelligence that works with severe restrictions yet is still unbeatable for almost anyone. This BOT is evolved, via a neural network, to employ only a subset of wager sizes: checking, ¼, ½, 1x, 2x pot (or all-in if less than 2x pot), for each betting action. It is trained vs. a myriad of strategies but limits the space of its response to those bets. It is further restricted to only employing a single bet size for all of its holdings at a particular game state. Game state is defined as the cards of each player, their chip stacks and the community cards with the action at a defined location. This means that this BOT is restricted to selecting one of its bet sizes for the entire range of hands it might hold at a particular node of the game tree. To be clear, it has a checking and betting range on postflop streets and uses only one of its available sizings for every betting hand. When the expected value (EV) of checking and betting is equal, it will randomize that action but still use only a single bet sizing when aggressing. This remains true even when another bet size has the same or higher EV for a particular card combination. The EV of its betting range is being optimized, not that of the particular holding. This type of approximation is often used commonly by human players preflop where one may open the betting in a hand using a consistent bet size. One might bet three big blinds from a particular position with all the card combinations they chose to continue with and fold the rest. PokerSnowie never splits its ranges and works under this constraint at all streets – it optimizes for all the holdings that will bet at each node of the game tree. 
We do not want to emulate its behavior in this regard; who wants to use the same bet size for top pair and a full house on the same board? Nonetheless, it makes clear how the card combinations in a range each work together to support an action at each game state. This paradigm is formidable to face even in this simple realization.

Consider: selecting a uniform bet size preflop prevents opponents from gaining insights into what hands we have when opening the betting – they only recognize that we chose to bet with a group of holdings. While an opponent can safely assume a reasonable strategy has AA in its UTG opening range and not 27, a three big blind UTG open may be AA, 77 or T9s. The opponent can’t simply attack the weaker holdings or fold to the AA. Further, while AA can support a larger opening size with a higher profit, using a particularly large size for just AA would allow opponents to counter-strategize. Especially when stacks are deep, knowing the exact holdings of our opponents would allow effective counter strategies and well-designed exploits. That is not to say that the preflop opener could not optimally defend a range of only AA, but this approach would necessarily weaken the preflop range now depleted of its strongest holding. In practice, these situations arise in far-from-ideal play, where opponents might open larger with strong holdings to limit the number of callers and “protect” their strong preflop hands.

Let me define some terms formally. An equilibrium situation in poker is one in which the game states are, for example, iterated to consistency such that all the strategies become invariant in the self-consistent process. They can do no better or worse vs. each other. There will be an EV associated with each strategy at equilibrium. An optimal equilibrium evolves the strategy such that the EV is a maximum under the constraints (e.g. bet sizes, stacks, ranges) input into the solving. One might optimize one or all of the strategies and their associated EVs. If all the strategies are simultaneously optimized, one gets an optimal equilibrium that, in a zero-sum symmetric game, has to yield zero EV for all the strategies when a stable solution is present. This is known to be true in heads-up NLH, where there is a GTO solution and Nash equilibrium, and is more complicated in multi-player versions. Still, such equilibria are found in computational simulations of multiplayer NLH, and the situation has been considered formally. Any play outside these strategies is a non-equilibrium excursion, even if the play is part of a different game state’s equilibrium – that is how equilibrium works. Fluctuations from equilibrium are represented at some frequency in other equilibria. In non-equilibrium dynamics, many things are possible – see below.

All this is to say, any reference to this body of knowledge is summed up colloquially in most poker conversations by calling anything vaguely equilibrium-like GTO. Reality is richer and more complex. Even the zero-EV set of strategies will not be the ultimate solution if constraints on the solution space are present, like limiting the choice of bet sizes.

There is a strong rationale for betting in ranges, where hands can be lumped together to support some optimal bet sizing. Ranges offer both information hiding and strategic flexibility. The combinations can be considered to support each other. In our preflop opening range, AA will robustly hold a lot of equity on many flops, e.g. vs. a single caller.
Nonetheless, on flops like 678, we are grateful to have T9 and 77 to combat an aggressive button defender, especially when deep stacked. The strengths and weaknesses of each combination, interacting with the defending preflop ranges and each distinct flop, together support a particular bet sizing. Ranges at all stages of the game tree work similarly.

Preflop, a decision to use a uniform bet sizing for an entire opening range from each position is a common choice. It is supported by theory as being simple while capturing most of the potential EV. One can, however, use multiple ranges for preflop holdings. This is commonly seen when someone adds a limping range in addition to a multiple big blind opening size, e.g. on the button in an ante game. Indeed, under certain conditions, splitting preflop ranges across sizings is possible, and the optimal opening size and range vary with position. Doing this adds both EV and complexity to the strategy and can allow for playing a wider range of hands. Further, any optimal strategy that removes a constraint present in the previous strategy will be more profitable. This is a mathematical result, in that the new strategy would otherwise reduce to the old strategy if the optimization did not produce a more lucrative plan. Generally, it is helpful to think of our options as an EV landscape and to steer ourselves into the highly positive regions of this surface. For example, there might be game flow reasons to employ one strategy, with lower equilibrium EV, over another because it affords better exploits in a particular poker ecosystem.

This line of thinking led me to assess the value of the hidden information. A bot like PokerSnowie is an opponent that will not adapt its (converged) strategy to an opponent’s play. Note, an optimal or GTO approach is literally the strategy that requires no knowledge of the opponent’s strategy. Indeed, playing a GTO strategy would mean never adapting to an opponent in any way. When people aspire to such things, they are typically misunderstanding poker. One desires a balanced, flexible baseline from which to both defend vs. attackers and simultaneously be poised to exploit weaknesses in the approach of an opponent. Profitable poker is counter-strategic. For example, if a live poker game is underway and you see all the players approximating a GTO approach – run away from the table. Practically speaking, developing a robust strategy that shows up with strong hands and viable bluffs for the most common situations that arise is a useful paradigm. The more common the situation is, the more precise the strategy is. Use this strategic flexibility to play against your opponent’s strategy. It would be interesting to play vs. PokerSnowie, both knowing its cards and again without that advantage, to explicitly quantitate the value of that information. Information is always money in poker.

Next, consider two distinct strategies derived from a simulation, both with similar EV: the kind of strategies with fixed yet distinct bet sizes derived from something like PioSolver. To make things concrete, imagine a common Button vs. Small Blind confrontation where many strategies of similar EV are viable. One can play both strategies, perhaps randomized, and form a new strategy. Imagine picking a random number from 0-1 at the onset of the hand: if it is less than 1/2, we employ strategy A, and when the random number is over 1/2, strategy B is used. Distinctly, someone could play strategy A one day and B the next.
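A minimal sketch of that kind of randomized mixing, assuming strategyA and strategyB are placeholder functions that map a game state to an action (the 0.5 weight mirrors the coin-flip example above; any mixing coefficient works):

```js
// Illustrative only: mix two fixed-sizing strategies with a weighting coefficient.
function mixStrategies(strategyA, strategyB, weightA = 0.5) {
  return function play(gameState) {
    // Randomize once at the onset of the hand (or per decision, as desired).
    return Math.random() < weightA ? strategyA(gameState) : strategyB(gameState);
  };
}

// Usage: a 50/50 mixture, as in the 0-1 random number example.
// const mixed = mixStrategies(strategyA, strategyB, 0.5);
// const action = mixed(currentGameState);
```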
Both of these approaches – randomizing within a session or switching strategies day to day – present challenges to our opponents in inferring our holdings from our bet size. These considerations imply that playing multiple strategies with weighting coefficients presents an opportunity; strategy A and strategy B can be mixed in any proportion. The logical extension of this is using a distribution of bet sizes for each range in a given game state. Some functional form can be assumed for the distribution of bet sizes. A bet size is then chosen randomly to reproduce the optimized bet sizing probability distribution. A delta function distribution reduces to a single strategy with fixed bet sizes. Further optimizing the distribution itself over function space should represent the complete solution to NLH, in contrast to the approximations of multiple fixed bet sizes. This is an unexplored avenue. Such a strategy removes an additional constraint and provides for significant additional information hiding.

In discussing ranges above, it was implicit that ranges bifurcate and split into even more discrete parts as the game tree is explored. We might have a river spot where our full houses and simulant bluffs bet one sizing, while straights make a significantly smaller wager. Indeed, optimal solvers find such spots to be ubiquitous and even find that one should randomize certain holdings between the bet sizing ranges. In the new paradigm, there is an associated probability that the nut flush in the above spot sometimes bets with the full house sizing and, another percentage of the time, with the straights. Current strategies allow for leakage of information that is far from optimal. It is easy to imagine bet sizing distributions with significantly overlapping tails that make our opponents’ lives miserable when guessing our intentions.

Imagine, then, that one allowed for bet sizing distributions that are optimized for EV. When choosing to bet, one would randomize from the bet sizing distribution to pick a bet size. A very simple distribution is the fixed, bifurcated sizing example above, which is commonly used. What I am suggesting is a continuous distribution that assigns optimal probabilities to wagers from zero up to all-in. Because this reduces to the extant formalism, any success at optimization suggests this is the actual nature of the solution to NLH, or big bet games more generally. In the example above, a large bet would imply a full house or a bluff from a card combination that mimics full houses by blocking them. In our new paradigm, there may be a small probability that the straights and their associated bluffs are betting with this sizing, creating yet more information hiding and additional EV.

Imagine preflop ranges in which each combination was allowed to bet from a distribution of bet sizes that would clearly overlap. Stronger hands could average larger bet sizes but not reveal themselves, as weaker holdings would have a fractional probability of betting the same size. The actual bet size would be randomized from the optimized, predetermined bet sizing probability distribution. This offers the potential for increasing the EV of a strategy while maintaining information hiding. Some accomplished players implement intuitive versions of this strategy by using many different bet sizes for a given game state. A formal implementation with randomization would expand this beyond only using these ideas exploitatively. I believe this is the nature of the actual solution to NLH.
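A toy sketch of sampling from such a sizing distribution, assuming a truncated normal over pot fractions purely for illustration; setting the spread to zero recovers the delta-function case, i.e. a single fixed sizing:

```js
// Toy sketch: draw a bet size from a continuous sizing distribution.
// meanFraction and stdDev parameterize an assumed truncated normal over pot fractions;
// stdDev = 0 is the delta-function case and reduces to one fixed sizing.
function sampleBetSize(pot, stack, meanFraction, stdDev) {
  // Box-Muller transform for a standard normal sample.
  const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);

  const size = (meanFraction + stdDev * z) * pot;
  return Math.min(Math.max(size, 0), stack); // truncate to [0, all-in]
}

// Example: average around 2/3 pot, with tails that overlap other hands' sizings.
// const bet = sampleBetSize(pot, stack, 0.66, 0.25);
```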
Let me reconsider the use of solvers in today’s poker climate. Simple solvers like PioSolver are useful in very well-defined situations. They are used primarily in heads-up spots and require the input of assumed ranges and bet sizes. People often, in a single simulation, consider multiple bet sizes simultaneously and choose the one that seems best. That is not a mathematically justified experiment, as critical card combinations can be split between ranges. Consider: when facing large bet sizings, the ability to make the nuts in that situation and the frequency associated with having the best possible hand will be essential variables. Bet sizing is also going to interact strongly with range choice, and due to the combinatoric possibilities and the computational demand of the solving algorithms, default choices are ubiquitous. Thus, many bet sizing strategies remain unexplored in a follow-the-leader poker universe. Further, in multi-way situations, disparate stack sizes are also critical to the solution, as four identical stacks are very different from a short, a medium, and two large chip stacks.

Still, in an online environment, all is not lost: the games tend to play out in a homogeneous setting with 100bb stacks and very similar professional / regular player strategies, and most postflop scenarios are engaged heads-up. Thus, a lot of progress has been made in common confrontations like Button vs. Blinds in this well-defined milieu. Still, the results are far from an EV-optimized equilibrium even in these idealized situations. The bet sizing space is undersampled, not randomized, and sensitivity to range uncertainties has not been carefully explored. Note, sensitivity analysis for changes in parameter space, providing simple guidelines to modify a known strategy, is unexplored territory. Poker is played for large amounts of money by smart, talented people. Nonetheless, most of these folks have little training in quantitative methods, and there is no incentive and there are few mechanisms to share formal progress on interesting questions.

Using software like PioSolver for live games is a different situation altogether. There the uncertainty in ranges is vast if the live game itself is worth playing. Consider: if a recreational player calls an open with JT, whether they include the off-suit combinations changes the mathematics drastically, with only four of the sixteen possible JT combinations being suited. Now, the player may or may not have all of, or only, the suited J9 and J8 combinations too -- the situation becomes untenable from a solver-derived strategy perspective. I made a large postflop error for 200bb stacks on a 755 flop. I correctly assessed that my two opponents would call the $90 preflop raise with a wide variety of suited holdings. However, the winning player called my all-in flop bet with T5 off-suit; I had drastically underestimated the number of 5s in their preflop range and played my hand poorly as a result. Understanding range construction / interaction in confrontations is far more important than deciding whether to bet 1/2 or 2/3 pot in a particular situation.

Further, consider that many pots become multi-way postflop, and one is facing a variety of strategies and stack sizes. Note, a collusive set of independently losing strategies can make the value of your holding change radically. Collusive here is not meant as cheating -- the strategies simply happen to align against your strategy effectively. They can even make it impossible for you to win, even with, say, the highest EV holding on a flop.
The optimal or GTO-like strategy is only guaranteed to win vs. lesser strategies on average, when sampled over ranges and possible future game states. A bunch of donkeys can indeed regularly crack your AA, and these are the games in which one should endeavor to play. Poker worth playing is counter-strategic. If the games get so competitive that you are only playing a set GTO / equilibrium-mimicking strategy, find something else to do. The fun in strategic poker is in constructing strategy on the fly, in new situations, that plays well vs. your opponents.

Brian Space is a scientist and professor seeking people to play Quantum Statistical Mechanics for money. He plays poker but is no old man coffee. Remember, the GTO strategy ignores that of the other players – do you really want to play like that? His poker articles are available on his web site: http://drbrian.space/poker.html

Source: GTO, the Value of Information, and the Nature of the Solution to No-limit Hold ‘em – by Brian Space

      Read at 12:37 pm, Oct 20th

    • The End of Housing as We Know It

      The nail salon in Queens where Mariwvey Ramirez works reopened earlier this month, but the customers have been hesitant to return.

      Read at 12:04 pm, Oct 20th

    • Socialists claim massive victory in Bolivia, one year after being ousted

      LA PAZ, Bolivia — Exit polls issued early Monday showed Bolivia's socialists taking a seemingly insurmountable lead in the country's bitterly fought presidential election, a result that, if confirmed by the official tally, would amount to a massive popular rebuke of the right-wing forces that drov

      Read at 03:58 am, Oct 20th

    Day of Oct 19th, 2020

    • https://revolutionsperminute.simplecast.com/episodes/rent-strike-featuring-marcela-mitaynes-62PSn6B3

      Read at 11:36 pm, Oct 19th

    • The Tenants Who Evicted Their Landlord

      Last May, Minneapolis City Council members found their leather seats on the raised dais and looked out at the chamber.

      Read at 11:19 pm, Oct 19th

    • {errorception} blog: The Tale of an Unfindable JS Error

      At Errorception, I genuinely care about making people's front-end code bug-free. Often this means that when people email me asking for help with bugs they are seeing, I wholeheartedly help them track down and fix the bug. This morning I woke up to an email from a user, asking about a bug that had occurred nearly 4,000 times in the last two days!

Hey Rakesh, This just showed up yesterday, and I'm really stumped by it. It's only on Chrome 18/19, and it's put us way over the daily limit. <link to the bug> Just wondering if you seen it before, as I found one reference to it being caused by the Twitter widget (https://gist.github.com/1878283). I've grep'd every piece of JS looking for it, but haven't found one mention. Any ideas? <snip>

I rubbed my eyes and got ready to start looking at the user's site, almost certain that he must've missed something. I mean, how could he have an error in his site and not have a source for the error? That just doesn't make sense. Looking at the error report, this is what it said:

Uncaught TypeError: Cannot read property "originalCreateNotification" of undefined

It's an inline error on the page, not an external JS file. This just has to be something on the site, I thought to myself. Hit the page, viewed the source, and there was nothing there. That's weird, I thought, because Chrome usually does a good job of reporting line numbers correctly.

As I was brushing my teeth, I got the feeling that I had heard about this error before. originalCreateNotification. Where had I heard about this before? Toothbrush still in my mouth, I did a quick Google search. The only sensible hits were the gist the user mentioned above, and a StackOverflow question where the poster was asking for help with his code. His code used a variable called originalCreateNotification. That's an interesting coincidence, I thought to myself, but I was sure that that's not where I had heard about this bug.

Then, I decided to search my mailbox for this bug, wondering if someone had emailed me about it before. There I found it. Another user had once mailed me about this error, and that it had occurred 17,000 times within a few minutes, all from the same client! I recollected that we had tried to track down the bug for quite some time, but we couldn't. The error interestingly never happened again. We just let the matter be, hoping to never see the bug again. Two different users facing the same bug! Both the cases were in Chrome, and on line 4! That's just too much of a coincidence, I thought. Something's going on here.

Twitter? I jumped to the gist that we had found. It was authored by Pamela Fox. I have tremendous respect for Pamela, and I know that she has her own internal error reporting system. The gist is the code she uses to catch and post errors to the server. She has a list of errors she ignores because they're unfixable, and originalCreateNotification is one of them. Clearly, I wasn't the first to find this bug. She has a comment there, simply saying // Twitter widget. So Twitter seems to be responsible for this problem. That also explains why two different users got the same error. Case closed, I thought. Except, it didn't explain one little thing. How did Twitter generate so many errors in such a short time? It's not unusual for files to be somehow unreachable at the client, so it's entirely possible that a small fraction of Twitter's widget users will face an error. However, that would just be an error or two. It can't be tens of thousands of errors.
I decided to find out which Twitter widget this is, and find the bug in their code. So, I pulled up my user's page again to find the Twitter widget that he was using, so that I could start looking at Twitter's JS code. That's when the case became far more complex. The user didn't have any Twitter widgets on the site!

Is there anybody out there? Back to square one, I thought, as I prepared my green tea. By now, I was too deep into this. I wanted to find and solve the problem once and for all. Two of my users had already noticed this error, and it wasn't cool that I wasn't helping them fix it. How is it that no one else has mentioned the error on the web anywhere? Of course, one reason is that hardly anyone even tracks JS errors in the first place. It's a shame, I thought. Maybe a lot of people face this error and are completely unaware of it.

Back at my laptop, I decided to ssh into my server, and find any other occurrence of this error across all my error logs. Ran the query. 57 sites had seen this error occur at some time or the other! All of them with similar characteristics. They all occurred in short bursts, tens of thousands of times! And all of them were on Chrome, at line 4! I was onto something, but I wasn't sure what it was. I needed to get more information. So, I took all the user-agent strings of the browsers where this error occurred, and started looking at them for patterns. The browser version numbers were too varied to extract any meaningful information. It seems to occur on all versions of Chrome, I thought. Not too useful. Dead end.

OS specific? It can't be OS specific, can it? It didn't make sense for a bug in Chrome to be OS specific. That era of browser bugs is long gone, and I've never experienced such a thing with Chrome. In any case, I started mining the UA strings again, and found that it occurred on all versions of Mac and Windows. No dice. There wasn't a single instance of Linux, but that's probably just a coincidence, I thought. It may be because Linux has a lower market-share on desktops.

With no other lead, and almost ready to give up, I was staring at the Google search results page for originalCreateNotification. Just two sensible hits. Pamela's gist, and a StackOverflow question. I decided to read the StackOverflow question as I kept pondering what the cause of the error might be. The question read: I'm attempting to override the default functionality for webkitNotifications.createNotification and via a Chrome extension I'm able to inject a script in the pages DOM that does so. […]

A Chrome Extension! Why didn't I think of that! I decided to find out more about the author of the question, and landed up on his site, where he was advertising a Chrome Extension. Clicked through, and I landed up on the Chrome Web Store. This extension is meant to create more native Webkit notifications on Linux Operating Systems which use the notify-osd notification framework. Notify-OSD is installed on Ubuntu by default and used for all system notifications. This has to be it! It explains why the bug only exists on Chrome, why it occurred at the same place at all times, why it was independent of the site or its contents, and more importantly why it never occurred on Linux!

Time to do a couple of quick checks to test my theory. Fortunately, the extension code is open-source. With my heart racing, I pulled up the repo, and started browsing it to find a mention of originalCreateNotification, and there it was, staring me in the face!
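The post does not reproduce the extension's source, but the failure mode it describes can be sketched roughly like this (a hypothetical reconstruction, not the actual extension code): an injected script stashes the original createNotification only when its Linux-specific setup succeeds, and a later call path assumes the stash always exists.

```js
// Rough, hypothetical illustration of the failure mode -- not the extension's code.
// Pretend the injected setup only succeeds on Linux systems with notify-osd:
const isLinuxWithNotifyOsd = false; // e.g. false on Mac or Windows

let notifyState; // stays undefined when the setup branch is skipped
if (isLinuxWithNotifyOsd) {
  notifyState = {
    originalCreateNotification: window.webkitNotifications.createNotification,
  };
}

// A later code path assumes the setup always ran:
function createNotification(icon, title, body) {
  // On non-Linux platforms notifyState is undefined, so this line throws
  // 'Uncaught TypeError: Cannot read property "originalCreateNotification" of undefined'.
  return notifyState.originalCreateNotification(icon, title, body);
}

createNotification("icon.png", "Hello", "world");
```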
It was clearly an extension designed for Linux, so it made sense that the author might not have tested it on other platforms. I decided to give it a spin. I installed it on my Mac, and BAM! My console filled up with Uncaught TypeError: Cannot read property "originalCreateNotification" of undefined, on line 4! I don't think I've been happier seeing an error.

I've now filed a bug report, asking the author to do what he can to make the extension fail gracefully on Mac and Windows. Meanwhile, I've added originalCreateNotification to my blacklist, so that none of my users will ever see this problem again.

Another day, another bug squashed. Just another perk of working with Errorception. You are tracking JS errors, aren't you? If you aren't, start now!

Source: {errorception} blog: The Tale of an Unfindable JS Error
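The blacklist mechanism itself isn't shown in the post; a minimal sketch of the general idea, in the spirit of the Pamela Fox gist mentioned above (the reporting endpoint and pattern list are assumptions), could look like this:

```js
// Minimal sketch of ignoring known-unfixable errors before reporting them.
// The endpoint URL and pattern list are illustrative assumptions.
const IGNORED_ERROR_PATTERNS = [
  /originalCreateNotification/, // caused by a third-party Chrome extension, not the page
];

window.onerror = function (message, source, line, column, error) {
  if (IGNORED_ERROR_PATTERNS.some((pattern) => pattern.test(String(message)))) {
    return true; // swallow: nothing on the page can fix this
  }
  // Otherwise, report it (fire-and-forget).
  navigator.sendBeacon(
    "/errors", // assumed collection endpoint
    JSON.stringify({ message, source, line, column, stack: error && error.stack })
  );
};
```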

      Read at 10:43 pm, Oct 19th

    • The History of 'Stolen' Supreme Court Seats | History | Smithsonian Magazine

      As the Trump administration seeks to fill a vacancy on the Court, a look back at the forgotten mid-19th century battles over the judiciary. (Image: Old Supreme Court Chamber in the U.S. Capitol) March 20, 2017 | Updated: September 25, 2020

Editor’s Note, September 25, 2020: This article was published after Merrick Garland’s nomination to the Supreme Court expired after Senate Republicans declined to hold a vote on President Barack Obama’s nominee because it was an election year. As controversy continues over the push to replace the late Justice Ruth Bader Ginsburg in another election year, this piece about past battles over nominations to the Court became relevant again.

A Supreme Court justice was dead, and the president, in his last year in office, quickly nominated a prominent lawyer to replace him. But the unlucky nominee’s bid was forestalled by the U.S. Senate, blocked due to the hostile politics of the time. It was 1852, but the doomed confirmation battle sounds a lot like 2016.

“The nomination of Edward A. Bradford…as successor to Justice McKinley was postponed,” reported the New York Times on September 3, 1852. “This is equivalent to a rejection, contingent upon the result of the pending Presidential election. It is intended to reserve this vacancy to be supplied by Gen. Pierce, provided he be elected.”

Last year, when Senate Republicans refused to vote on anyone President Barack Obama nominated to replace the late Justice Antonin Scalia, Democrats protested that the GOP was stealing the seat, flouting more than a century of Senate precedent about how to treat Supreme Court nominees. Senate Democrats such as Chuck Schumer and Patrick Leahy called the GOP’s move unprecedented, but wisely stuck to 20th-century examples when they talked about justices confirmed in election years. That’s because conservatives who argued that the Senate has refused to vote on Supreme Court nominees before had some history, albeit very old history, on their side.

What the Senate did to Merrick Garland in 2016, it did to three other presidents’ nominees between 1844 and 1866, though the timelines and circumstances differed. Those decades of gridlock, crisis and meltdown in American politics left a trail of snubbed Supreme Court wannabes in their wake. And they produced justices who—as Neil Gorsuch might—ascended to Supreme Court seats set aside for them through political calculation.

“There is this tendency to view history through rose-colored glasses from time to time, and to suggest we’ve never been this political,” says Charles Gardner Geyh, a law professor at Indiana University and author of the 2008 book When Courts and Congress Collide. “In reality, we have always had a highly politicized selection process.” Several times in the 1800s, Geyh says, “the Senate certainly appears to have delayed with an eye toward saving the nomination for the next president.”

Though Garland’s failed nomination was far from unprecedented, at least one aspect of the modern Republican Senate’s move was new. The mid-1800s seat-snatchings took place before hearings on Supreme Court nominees were standard protocol, and before nominations were the subject of much open debate. So the historical record of why the Senate ran out the clock on the early nominees is thin, leaving historians to interpret its political motives from news accounts and correspondence of the time. Past senators kept their political motives unspoken; today’s admit them with pride.
“On several of these failed nominations, there seem to have been ostensible merit-based objections,” says Geyh. “Even if you can look at it and raise your eyebrows, and say, ‘Well, that really doesn’t seem like the real reason,’ they at least felt they needed that fig leaf. There was no such fig leaf with Garland.”

Battles over a president’s late-term judicial nominations are nearly as old as the Constitution itself. Thomas Jefferson’s successful fight against John Adams’ “midnight judges,” appointees rushed through in Adams’ last days in office in 1801, led to the famed Supreme Court case Marbury vs. Madison. While the case is well known for establishing the court’s power of judicial review, its facts are less remembered. Just before Adams left office, Congress created dozens of new judicial positions. Adams quickly appointed men to fill them. When Jefferson took office, he refused to acknowledge some of Adams’ judicial appointments. William Marbury, an Adams appointee for District of Columbia justice of the peace, sued to receive his commission anyway, but lost the case. Jefferson later convinced Congress to abolish the new judgeships.

The next big nomination battle, also after an election, involved Adams’ son. In December 1828, two weeks after Andrew Jackson defeated incumbent John Quincy Adams in the Electoral College, Adams nominated Kentucky lawyer John Crittenden to replace Justice Robert Trimble, who had died that August. The Senate, voting largely along partisan lines in February 1829, postponed Crittenden’s nomination, as well as two of Adams’ three December nominations for federal district judgeships. That the Senate was saving the seat for Jackson to fill was lost on no one. “What a set of corrupt scoundrels,” Kentucky congressman John Chambers wrote to Crittenden, “and what an infernal precedent they are about to establish.”

In 1844, the Senate went a step further, blocking President John Tyler from filling a Supreme Court seat before an election. Tyler, the first unelected president, ascended from the vice presidency in 1841 after William Henry Harrison’s death. His fights with his fellow Whigs started quickly, and in 1842, they threw him out of the party. By 1844, when the deaths of two justices gave Tyler two Supreme Court seats to fill, the Senate was in no mood to accept his nominees.

Stubbornly, Tyler nominated his brusque, short-tempered Treasury secretary, John C. Spencer, for the first open court seat in January 1844. The Senate rejected Spencer, 26-21, after a closed debate, with most Whigs voting against him. Spencer’s personality and politics both played a part in his defeat; Whigs felt that his decision to accept a spot in Tyler’s cabinet was traitorous. But historians think politics played a larger role in what happened next. In March, Tyler put forward Reuben Walworth, chancellor of New York’s state court system, followed by Edward King, a well-respected Pennsylvania judge, for the two open seats. The Senate sat on both nominations for almost a year without explanation.

“The heated contest which had long prevailed between the President and the Whig Senate made it unlikely that his appointments would be confirmed,” Charles Warren wrote in his 1922 book, The Supreme Court in United States History. What’s more, noted Warren, Crittenden—the rejected 1828 nominee—was a favorite for the Court if Henry Clay, also a Whig, won the election. The prospect of a 16-years-too-late victory may have motivated Walworth’s toughest critics.
They included Whig Thurlow Weed of New York, who called Walworth “odious,” “querulous,” and “disagreeable” in a letter to Crittenden. But that’s not why Walworth never became a Supreme Court justice. In February 1845, after Democrat James K. Polk beat Clay, Tyler substituted two new nominees for Walworth and King. The Whig Senate allowed Tyler to fill one of the two court seats. He offered Samuel Nelson, another top New York judge. “Nelson was a lawyer of conspicuous ability,” Warren wrote. “The choice was so preeminently a wise one that the Senate at once confirmed it.” Tyler’s late replacement nomination for King, though, was tabled without a vote. Once Polk took office, he filled the seat with Pennsylvania judge Robert Grier, who served on the Supreme Court for 21 years.

It’s no coincidence that Tyler and the next two presidents to be denied Supreme Court nominations in an election year are among the least-respected presidents in American history. Tyler, Millard Fillmore and Andrew Johnson were the first unelected presidents, political misfits who ascended from the vice-presidency after presidents’ deaths and quickly fell into deep conflicts with Congress. “It doesn’t help that these guys are not only [considered] illegitimate, but despised,” says Geyh.

Fillmore, the last Whig president, was a famously disagreeable man who started his administration by firing the late Zachary Taylor’s entire cabinet. By the time Justice John McKinley died in 1852, Fillmore had already lost his party’s nomination for a second term in office. “Everyone knew he had already lost,” says Geyh, “so he was doubly de-legitimated.” On August 16, Fillmore nominated Edward A. Bradford, a Louisiana attorney. The Democrat-controlled Senate adjourned two weeks later without confirming Bradford, offering no explanation. Pierce did win the presidency, so lame-duck Fillmore tried twice more, nominating U.S. Sen. George E. Badger, then New Orleans attorney William Micou, in early 1853. But the Senate ran out the clock. “It acquired almost a flavor of the pathetic,” Geyh says. “[Fillmore] could produce the second coming of Jesus Christ and nothing was going to happen.” Pierce’s justice, John Campbell, was a Democrat from Alabama who joined the court’s pro-slavery majority in Dred Scott vs. Sandford and vacated his seat to join the Confederacy as assistant secretary of war in 1861.

The most audacious block of a president’s ability to name a justice came in 1866, when new president Andrew Johnson tried to fill a Supreme Court seat left vacant for months, and Congress killed the nomination by shrinking the size of the court. Abraham Lincoln had named Johnson, a Tennessee Democrat, his 1864 running mate to balance his ticket, but in 1866, Johnson and Congress’ radical Republicans began openly feuding over how to treat the South during Reconstruction. Johnson’s April 16, 1866, nomination of Henry Stanbery, a former Ohio attorney general and advisor to the president, was doomed from the start. Three weeks earlier, Johnson had vetoed the Civil Rights Act of 1866, which granted ex-slaves full citizenship rights. Congress overrode his veto and passed the law anyway. Word in Washington was that Stanbery had encouraged the veto and possibly even drafted the veto statement. “This, from the radical standpoint, is an unpardonable offense,” wrote the Cincinnati Enquirer’s Washington correspondent on April 21. “This very fact will probably defeat the confirmation of Mr.
Stanbery as Judge, not directly, however, but indirectly.” The Enquirer correspondent correctly predicted that the Senate would block Stanbery by approving a pending House bill to reduce the Supreme Court’s size. In July, the Senate voted unanimously to reduce the Supreme Court from ten justices to seven as vacancies opened up. Days afterward, Johnson successfully nominated Stanbery for attorney general instead. (Why didn’t Johnson veto the court bill? Perhaps he thought Congress would override him again: it passed the House with a veto-proof majority of 78-41.)

Did Congress trim the court’s size to sandbag Stanbery and Johnson? Historians disagree. Some argue that the bill addressed concerns from sitting justices that a court of ten was too big. But the timing of the move – just days after Congress overrode Johnson’s veto of the second Freedmen’s Bureau bill – bolsters the argument that partisan politics motivated the Radical Republicans. Stanbery went on to deliver the successful closing argument for the defense at Johnson’s 1868 impeachment trial.

After Ulysses S. Grant succeeded Johnson in 1869, Congress increased the number of justices to nine, a number that’s stood ever since. “[Congress has] developed a norm that you don’t play games with the size of the Supreme Court as a way to score political points,” Geyh says. That precedent grew with the 1937 rejection of Franklin D. Roosevelt’s court-packing plan.

Despite the stolen Supreme Court seats of the mid-1800s, says Geyh, the modern Senate’s outright declaration that no Obama nominee would get a hearing or vote in 2016 still violated the Senate’s norms. None of the tabled nominees of the 1800s were federal judges like Garland, whose qualifications the Senate endorsed in 1997 by confirming him for his appeals court seat, 76-23. “You’ve got a consensus choice,” says Geyh, “which makes it all the more bald-faced that the Senate would do as it did.”

Source: The History of ‘Stolen’ Supreme Court Seats | History | Smithsonian Magazine

      Read at 09:58 pm, Oct 19th

    • Chrome exempts Google sites from user site data settings

      Chrome exempts Google sites from user site data settings. October 7 2020, by Jeff Johnson. Support this blog: Link Unshortener, StopTheMadness, Underpass, PayPal.Me

In Google Chrome's "Cookies and site data" settings, accessible via the Preferences menu item or directly with chrome://settings/cookies in the address bar, you can enable the setting "Clear cookies and site data when you quit Chrome". However, I've discovered that Chrome exempts Google's own sites, such as Search and YouTube, from this setting.

Below I visit Apple's site, which sets some cookies and local storage. My settings allow only Twitter to keep site data, and you can see that all of the Apple data was deleted after quit and relaunch.

Now I visit YouTube, which sets some cookies and several other kinds of storage. After I quit and relaunch, the cookies are deleted, but the database storage, local storage, and service workers are still there! (Did you know there are so many different kinds of web storage?) Chrome respects the "Clear cookies and site data when you quit Chrome" setting for apple.com but not entirely for youtube.com. In order to prevent YouTube from saving data, you have to add it to "Sites that can never use cookies". (Note that adding YouTube to "Always clear cookies when windows are closed" is not sufficient.)

Now I try visiting Google Search, which sets some cookies and local storage. After quit and relaunch, the cookies are deleted, but the local storage is still there! Again, to prevent this from happening, you have to add google.com to "Sites that can never use cookies".

Perhaps this is just a Google Chrome bug, not intentional behavior, but the question is why it only affects Google sites, not non-Google sites. I've tested using the latest Google Chrome version 86.0.4240.75 for macOS, but this behavior was also happening in the previous version of Chrome. I don't know when it started.

(Some people are going to read this article and say "Use Safari instead of Chrome!" But it's important to note that Safari doesn't even have the feature to clear site data on quit, so Safari is actually worse. In this respect, Safari is years behind. Firefox and all of the Chromium-based browsers already have the clear site data on quit feature.)

Support this blog: Link Unshortener, StopTheMadness, Underpass, PayPal.Me

Source: Chrome exempts Google sites from user site data settings
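One way to spot-check this kind of behavior yourself (not from the article) is to run a few lines in the DevTools console on the site in question, before and after quitting and relaunching the browser:

```js
// Rough spot-check of what site data is still present for the current origin.
console.log("cookies:", document.cookie);
console.log("localStorage keys:", Object.keys(localStorage));

// IndexedDB databases (supported in Chrome).
indexedDB.databases().then((dbs) => console.log("indexedDB:", dbs.map((d) => d.name)));

// Registered service workers for this origin.
navigator.serviceWorker
  .getRegistrations()
  .then((regs) => console.log("service workers:", regs.map((r) => r.scope)));
```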

      Read at 04:08 pm, Oct 19th

    • Why the Alt-Right’s Most Famous Woman Disappeared

      Lauren Southern could spew racist propaganda like no other. But the men around her were better at one thing: trafficking in ugly misogyny. Updated on October 20, 2020, at 10:20 a.m. ET

      Read at 02:31 am, Oct 19th

    Day of Oct 18th, 2020