ongoing by Tim Bray

Jag Diary 8: Road Trip! 22 Jan 2019, 3:00 pm

Today I drove the new I-Pace 290.3 mostly-highway kilometers. In the best online I-Pace community, the top topic, with 1,115 posts as I write, is I-Pace range. Because when it comes to electric cars, range anxiety is a thing. Today’s road-trip report will cover the general highway experience but, since it’s the hot topic, will zero in on range. Spoiler: You can almost always go 300km without trying too hard.

Here’s the trip.

290.3 km of travel

It was a three-leg trip, from Seattle’s downtown to one of its western neighborhoods, then to SeaTac airport, then home to Vancouver; only the last (longest) leg is illustrated. This picture is from the JLR “InControl” app, which runs on your mobile and is also a Web site.

What it feels like

The I-Pace is a dream on the big highway. To be fair, this is largely due to it being a well-built modern car with modern features.

  1. I think I already mentioned the seats are fabulous, but it’s worth saying again: really great.

  2. The weather was lousy, between 5°C and 8°C pretty well the whole way, alternating between drizzle and lashing rain. The climate control is actually not as vanishingly perfect as our old 2003 Audi’s, as in sometimes you notice the fans are blowing a little harder than you’d like on your torso or thighs; easy to correct though.

  3. The automatic setting on the wipers did the job, shifting from extra-slow intermittent in the drizzle to bangin’ ’em hard in a tractor-trailer’s wake in a downpour.

  4. The assisted-cruise-control is a treat. You set a maximum cruising speed and when there’s someone in front of you (i.e. almost all the time) it follows them automatically; the default follow distance gives you exactly the two-second gap recommended in safety tips.

    By the way, I am not remotely interested in any “self-driving” capability that falls short of “Tim can open up his laptop and do a code review.” Seriously, what’s the point?

    But I think assisted cruise makes the highways safer in proportion to the number of people using it.

    I read at least one reviewer who said the Jag’s assisted-cruise implementation wasn’t up there with the best. I can believe it; when a slowpoke pulls out in front of you, the Jag deploys the heavy regen and it can be kind of shocking. And when you get out from behind someone who’s going a whole lot slower than you’ve set the cruise, the Jag decides it’s on a drag-strip.

  5. On the subject of raw power: I drove conservatively, assuming that this lurid-blue lightning bolt would be a cop magnet. But there were a few occasions when I booted it, for example when asshats tried to dart into my two-second gap, and on one occasion when I realized that I was about to be seriously in the way of three cars trying to merge from an on-ramp I hadn’t noticed, and there was no room to move over. Well, tee-hee-hee, there are very few cars in the world that can rocket-launch forward from a starting speed well over 100km/h the way this does.

  6. The car’s whisper-quiet around town (so nice) but when you’re doing 70+mph on rough asphalt in a driving rainstorm, it’s not dramatically quieter than a decent modern fossil car; the tires and all the air and water hitting the car can get in the way if you’re playing soft music.

Bottom line: I’ve driven this route too many times, in a variety of automobiles, my own and rented. The I-Pace got me home feeling really a lot less stressed and wasted than anything else I’ve done the trip in.

Now, about range

The wisest thing I’ve seen on the subject is a post on that forum by DougTheMac. It’s worth reading in full, but here are a couple of excerpts. This point, on how to think about range, is crucial:

I think there’s a big difference between the required behaviour on a longer-than-usual trip and a very long trip. The difference is because on a road trip, you are reluctant to let the SoC get below maybe 20% in case the charger you are relying on isn’t available and you have to divert. Also, on a road trip, you probably only want to recharge up to 80% because the last 20% is very slow. So, the distance between recharging stops on a very long trip is perhaps 60% of the actual achievable range.

But if you start from home with a full charge and pre-conditioned, and your destination at the end of the day is either home again or a destination with 100% certainty of an overnight charge, then you can use 95%+ of the battery capacity with reasonable confidence.

Explanatory note on “conditioning”. You can tell the car what time you plan to depart, and on schedule it’ll get the cabin all pre-heated for you, and if it’s plugged in, also boost the battery to the correct operating temperature. This is said to increase range, but I have no idea how much.

Here are his conclusions (for those of us in metric-land, his breakpoints of 100M, 150M, and 190M are around, respectively, 160km, 240km, and 300km):

  • Trip distance <100M: Charge to 80% (! q.v.), pre-condition, drive to have fun.

  • Trip distance <150M: Charge to 100%, pre-condition, drive to have fun.

  • Trip distance <190M: Charge to 100%, pre-condition, drive a bit more gently, monitor, but expect to get to destination without a charge en route, albeit perhaps down to <5% on arrival.

  • Trip distance 190-310M: Plan for a single en-route charge, ideally from c20% SoC (to give the safety margin required in case the planned charger is unavailable) but only up to the SoC required to get to the final destination with a minimal SoC (5%?). A single 20%-80% charge should add 120M (80%-20%=60%x200M), hence 190+120=310M with 5% on arrival at the “safe” destination. But if you only need an extra 50M, you only put in the required amount to just get you to your safe destination.
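DougTheMac’s 310-mile arithmetic is easy to check; here’s a few-line version using his own figures (a nominal 200-mile achievable range, his example number):

```python
# DougTheMac's road-trip arithmetic, in miles.
NOMINAL_RANGE_M = 200   # his example figure for achievable range

# Each en-route stop charges from ~20% up to ~80% SoC,
# so it adds about 60% of nominal range:
charge_added = (80 - 20) * NOMINAL_RANGE_M // 100    # 120 miles

# Start at 100%, arrive at the final "safe" destination at ~5%,
# so the battery itself covers 95% of nominal range:
battery_leg = (100 - 5) * NOMINAL_RANGE_M // 100     # 190 miles

print(battery_leg + charge_added)   # 310 miles with a single stop
```

Which is where his 190 + 120 = 310M figure comes from.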

Like I said at the top: You can go 300km, assuming you’re sure of having a place to charge when you get there. The road to Seattle is 230km and has lots of hills, and there are long stretches where the speed limit is 70mph and everyone cruises at 80. So while a chilly day like today isn’t the North American worst case — that would be something like Canada’s Rogers Pass in midwinter — it’s worse than average.

I used the car’s “comfort mode” both ways; it’s got an “eco mode” which could probably have done better. On the trip down I still had 90km of advertised range when I pulled up to a charger in the basement of an Amazon building, which suggests a total of 320km. Today I went 290.3km at an average speed of 87km/h and had 8% charge when I got home. The car said it had 26km of range left; do the math.
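“Do the math” goes like this — 290.3 km used 92% of the charge, so scale up:

```python
# Implied full-charge range from today's trip numbers.
km_driven = 290.3
soc_left = 0.08     # 8% on arrival

implied_range = km_driven / (1 - soc_left)
print(round(implied_range, 1))   # ~315.5 km from a full charge

# The car's own guess agrees closely: 290.3 driven + 26 remaining = 316.3
```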


After I unloaded, I headed over to the handy neighborhood fast-charger, where by “fast” I mean 50kW. This is about as close to a “full charge” as the Jag is ever likely to get: 78.174 kWh in 2 hours.


Modern battery-electric cars are just fine for medium-long road trips.


Jag Diary 7: I-Pace Over 100 15 Jan 2019, 3:00 pm

That’s over a hundred kilometers of course, which it took three days to achieve.

  1. I’m enjoying learning how to deploy that electric-Jag power gracefully. Yeah, you can floor it, which is shocking if brutish fun and might damage your neck vertebrae. But you know what’s way sweeter? Coming out of a corner, or around a slower car, and easing the accelerator down, and then further and further down, smoothly. You can’t do this for more than a few seconds without being in seriously unlawful territory anywhere this side of the Autobahn, but oh my goodness those are really very pleasing seconds; the pool of acceleration is bottomless.

  2. I’ve had two charging experiences so far (haven’t needed any, just trying to learn the ropes), both from ChargePoint, both perfectly smooth. You plug in the big heavy connector, fire up the app, hold your phone up to the machine, and away it goes.

    But I’m glad I’m going to have my own charger before the month is over, because the “fast” chargers like the one illustrated here are in demand and not well maintained; of the two in the picture, one is out of order and apparently has been for a couple of months.

  3. I-Pace at a fast charger, with its supporting infrastructure

    Above, charging in progress; below, another angle. The black box at the right below is supporting infrastructure; the label on the side says 80A worth of 3-phase power.

  4. There’ve been stories of people having trouble getting their I-Paces to connect to one charger model or another, but I’ve had none on two out of two experiences. The “fast charger” took me from 86% to 96% full in a half-hour. (That’s 11.26 kWh, or almost exactly one Canadian dollar’s worth at residential rates.) This is not as lame as it sounds, because the charging speed declines as the battery fills up, notably slowing down past 80%. Thus, when on a road trip, it’s considered to be smart and courteous, at a roadside fast-charger, to unplug and go at 80%.

  5. The turning radius is large, bigger than our old Honda van’s. Maybe something to do with the wheels being pushed out to the corners? This is particularly annoying because any fool can get into the rhythm of jamming a shift-stick back and forth between Drive/1st and R while three-pointing on a narrow street, but my fingers are nowhere near learning how to hit the Jag’s D and R buttons without looking.

  6. I-Pace car seat adjustments
  7. I’m dealing with angst about the seat position; it’s a paradox-of-choice problem, there are just too many controls. I may have to go all engineer and take a systematic approach, making one change at a time on one lever at a time. I should be clear that it’s a fabulously comfortable seat, it’s just that I can’t prove it couldn’t be better.

  8. The “Park Assist” feature, which has two dedicated buttons, doesn’t work. It’s not just me, I asked my peers and everyone agrees that yeah, it’s just broken. Shame on Jaguar, other modern cars get this right. I wonder if it can be fixed in software?

  9. In fact, it’s a good thing that the I-Pace is so much fun to drive, because it’s generally a pain in the ass to park. The visibility through the tiny dim near-horizontal back window is laughable, the rear-view camera is a little laggy (great picture though), and the car is wide. I’m sure I’ll figure out a way to curve into place, reasonably close to and parallel to the curb, but I haven’t yet.

  10. Driving my fossil car to work was dumb and this one being electric doesn’t change that. Parking is still expensive, as are the L2 chargers in Vancouver office-building basements.


Jag Diary 6: I-Pace Day Two 14 Jan 2019, 3:00 pm

No highways were harmed in the preparing of this blog fragment; our new I-Pace is for the moment an urban runabout. In town, it doesn’t get you anywhere faster than a Honda Fit or 20-year-old Ford Focus would. But those burn fossil fuels and we should all try to stop doing that.

[Yes, I said I was going to do live updates to Sunday’s Green Light piece, but that turns out to be annoying. Sorry.]

Jaguar projection

The door handles are flush when parked, but slide out when you unlock. When you unlock at night they glow and project, absurdly albeit with decent kerning, on the adjacent pavement.

  1. I originally wanted to get white paint because I unhealthily loved the red-leather-seats option and thought the white/red combo was cool. And I still think so, but I’m glad Lauren convinced me to go for Caesium Blue. It’s a lovely color, lighter than it seems in online pictures.

    I was afraid the blue/red would look kind of garish but now that I’ve seen it I don’t care. If I worried about that kind of thing I wouldn’t buy a Jaaag, right? I do wish they offered a nice forest green though.

  2. It’s dead cool to pull out your phone, pop open the car’s remote-control app, and set the internal climate to 20°C ten minutes before you go out to drive somewhere on a Canadian winter night.

  3. The headlights have autonomic voodoo; they click into high-beam mode when they think nobody’s in front of you (correctly, so far) and apparently try to point away from the eyes of drivers in oncoming lanes.

  4. Yeah, the infotainment screen navigation is kinda klunky and slow, and the menu tree isn’t the most intuitive thing. But it’s not that big.

    I found that within a day, I had perfected a leisurely wave gesture, somewhat in the style of a Tai Chi master or opium smoker, and achieved menu mastery.

  5. When I plugged in my Pixel 2, Android Auto initially coughed and belched, but the kinks worked themselves out and it’s now a first-class citizen of the big middle screen. It’s kind of cool having Google maps on the big screen and Jag’s own maps, which are more eyecandyful, up on the “instrument panel” screen behind the steering wheel.

  6. Jaguar I-Pace screens including Google Play

    Aesthetics by Jaguar on the left, by Google on the right.

  7. Prior to buying the car, I’d blown off negative comments on its infotainment app by saying “who cares, I’m going to use software from a software company.” But I have to say that the general presentation of the Jag software on Jag’s screens is prettier than Android manages. There’s been attention to typography, there are tonal gradients in the background, and so on.

    Having said that, it doesn’t have my 13,096 songs in the cloud like Google does, and doesn’t know what “OK Google, text Lauren” means, nor “OK Google, Calgary weather.”

    I’m not actually sure what outcome I’d like to see on this one.

  8. The user manual is available as a mobile app; the Android version gets horrible reviews mostly because it only covers a couple of Jaguar models. It’s actually not bad, with a search function and visual guide and rich hyperlinking. There’s also a PDF version online, and then when you get the car there’s a hefty slab of dead trees in the glovebox.

    However, all of these are sadly incomplete in their coverage. For example: The car comes with a feature where you can have it play whooshy spaceship crescendos as you accelerate, to help petrol-heads who are missing that fossil-combustion roar. No form of the manual told me where in the menus it was hiding. OK, I was trying to impress a 12-year-old, but still.

    Also, I wanted to fool with the instrument panel display, which has many modes. The manual is full of English sentences that have subjects and verbs and objects but somehow failed to map, in my mind, to what I was seeing on the car’s screens. I appreciated the advice I’d picked up in the online Jaguar community, which is that this wisdom is best consumed with the help of a couple of glasses of the Famous Grouse. So I went and bought some, but there are lots better Scotches and it didn’t help that much.

  9. Oh right, online communities. The best for my money is, as its name suggests, Euro-heavy; there’s a near-namesake (note the hyphen) but it’s not as good. On Reddit there’s r/jaguar and the still-embryonic r/ipace.

  10. I’ve signed the car into the home WiFi network and turned mobile data on. It came with an AT&T Canada SIM and despite several pleading emails from AT&T, I have no idea what that does. It was already network-connected somehow because the remote-control app tells me where it is and how charged it is and lets me do things like turn on the heating inside.

    I also turned network updates on, and noticed that the software is a couple releases behind the latest. More investigation is definitely required.

  11. Speaking of apps and suchlike, someone is building a Python library to talk to the I-Pace’s API, and making good progress. There are lots of interesting APIs, many of which Jaguar’s remote-control app doesn’t use. Hmmm…

[Life lesson: A really good time to write about something is while you’re learning about it. I learned this from Mark Pilgrim and if you’ve never heard of him that’s OK (if sad).]


Jag Diary 5: The Green Light 13 Jan 2019, 3:00 pm

I picked up the new family wheels, a 2019 Jaguar I-Pace, on January 12th. Current plans are for this blog fragment to get updates for the next couple of weeks, through a planned road trip, per the “diary” part of the series title.

The wait

I’d reserved a spot in March and ordered the car in July, so it’s been a wait, and I’ve been in a sort of Gatsby-and-the-green-light mode. It’s not as simple as expectations being high, although the car’s won loads of awards; the early shipments have not been problem-free, and the car has picked up a legion of haters — chiefly $TSLA longs, but still.

JLR fail

During the 5-month wait for the car, the number of times Jaguar contacted me: zero. The number of status updates they gave me: zero. The typical delay when I emailed asking what was up: Days. Mind you, I was dealing with salespeople and, from their point of view, the sale was already made.

Maybe I’m just dealing with a particularly lame salesperson or dealership, but in the event that I want to consider another JLR product, you can bet I am not going back to Jaguar Vancouver and I wouldn’t particularly recommend them.

In fairness, they handled the Big Day processes of paperwork shuffling, insurance coverage, taking my money, getting the remote-control app set up, and explaining the car’s workings, with perfect competence.

Jaguar I-Pace close-up

About as expected

First important finding: If you’ve been tracking the online traffic about the I-Pace, which I summarized in Jag Diary 3: What We Know, getting in the car and driving it around won’t present that many surprises. It is more or less what it says on the box, and what the Internet said about the box.

So I’m going to try to restrict myself in this space to findings that are new or refreshing or surprising, which probably means of specific relevance to those who are some combination of Pacific Northwesterners, Internet geeks, urban dwellers, environmentalists, and parents of teenagers.

Journey 1

Day 1 impressions

We picked up the car in the morning, then drove from Vancouver down to Steveston for lunch, then back home with stops for photography and shopping. To the right, the Journey graphic from the I-Pace remote app, which gets scathing reviews on Google Play.

  1. It’s smaller than you’d think. My carport’s still under construction so the Jag’s parked on the street in front of the house, and it’s not one of the larger cars along the block. The contrast between small outside and spacious inside is shocking.

  2. It’s nimble! That electric-motor acceleration is awesome; unless you’ve stomped the accelerator in a Tesla or I-Pace you just can’t imagine what it feels like. But in practice, it means that if you’re trying to get out of the supermarket parking lot across a couple lanes of oncoming traffic you can Just Do It; it feels like teleportation.

  3. The seats are just awesomely comfortable — I sprung for the upscale-but-not-racing level, with some huge number of adjustments, and it feels like I’m being cradled by a huge warm benevolent being.

  4. The Meridian audio, at first listen, is underwhelming, the bass sounding unnaturally light to my ears. Will report back after further adjustment.

  5. The regen braking is fun, but when Lauren was driving and I was passenging, I found it a little uncomfortable. Perhaps a combination of us developing better one-pedal driving skills and passengers just getting used to it will change this finding. But in the online I-Pace forums, a lot of people are recommending low-regen for comfort, especially on the highway.

  6. Lauren, who’s 5’5" with a light build, found the driving position perfectly comfortable and loved the handling. She’s right — this car loves to warp around a corner at speed.

  7. Parking takes some getting used to. It has lots of sensors and alarms which moan piteously if you get anywhere near the curb or cars fore and aft. My first attempt left it halfway out in the street, and crooked.


Oil Fail 6 Jan 2019, 3:00 pm

Today I learned things that I think every environmentalist and investment manager should know: A coherent argument that we are more or less at Peak Oil. Not the Nineties version, which worried that we might be running out of fossil fuels, but rather that global human petroleum demand is about at its all-time peak and about to start drifting down. Some of the key data points involve electric cars, which I care a lot about, and China, which is always interesting.

The effects are likely not enough to avert the oncoming global-warming disaster, but there are grounds for optimism about reducing its devastation. However, this will very likely tear the guts out of the global petroleum business.

Tweet thread

What happened was, I ran across an interesting Twitter thread starting with the bold claim that the internal combustion engine’s future had been killed and that the coming energy transition would pay for itself. It was compelling and I gave it a retweet, noting that people who live in an environmentalist-green bubble (for example, me) need to be skeptical about things that we’d like to be true. And so we should.

But I was intrigued enough to buy a 105-page PDF called Oil Fall; you can too, for $9, from its author Gregor Macdonald. If you care about these subjects, you should. I sure enjoyed it.

(If you buy it: While you can read it on a Kindle, don’t try, the type’s too small. It might be OK on an iPad. Or you can just pop it open in Preview on a Mac or whatever the Windows equivalent is. It’s nicely typeset and illustrated; the print isn’t dense and the 105 pages go by fast.)

Oil Fall by Gregor Macdonald

The larger stories — of the increasingly-threatening specter of climate change, and the growing viability of renewable energy — aren’t new at all. But there’s one piece of new news: The shocking surge of Electric Vehicles (hereinafter EVs) in China in 2018, concurrent with a decline in overall vehicle sales there. Probably 1.2 million or so EVs were sold, surging to comprise 7% of all sales toward the end of the year. CleanTechnica has instructive coverage.

The argument in Oil Fall spends a lot of time on electric cars, since they are at a point of surprisingly high leverage in the global energy economy.

I’m not going to replicate the Oil Fall narrative, but here’s a quick sort-of outline.

  1. Los Angeles as a case study in EV adoption. It’s not the US leader (that’d be the Bay Area) but it’s got huge leverage.

  2. The politics and economics of renewable energy in California and across the US.

  3. The special leverage of the EV on the energy economy.

  4. The structure of China’s (historically coal-dominated) energy economy.

  5. Why China is crucial to the future price of fossil fuel

  6. The timing of the peak in global oil demand.

  7. Efficiencies and inefficiencies in fossil and renewable energy sources.

  8. The issue of energy storage.

  9. Economic cost of transition to renewables; estimates have been around 2% of GDP, but Macdonald thinks it’ll be closer to zero, or even negative. That’s what Alexandria Ocasio-Cortez and the young US progressives are arguing with their “Green New Deal” proposal.

  10. Global-warming prospects.

On the writing

I don’t know much about Macdonald. His resume sounds respectable, and I quote: “He has written for Nature, The Economist Intelligence Unit, The Financial Times of London, The Harvard Business Review, Atlantic Media’s Route Fifty, The Petroleum Economist, and Talking Points Memo.”

The text is well-supplied with numbers and supporting infographics. He is careful to address, for each key point, the objections to his reasoning and the alternative scenarios that would arise should he be wrong. I didn’t check his numbers exhaustively, but every one of those that I did try to verify checked out.

The style is that of a professional journalist: Not colorful but extremely clear, readable, and full of named real-world examples illustrating his arguments.

I’m going to dig a little deeper into a couple of points that were new to me and resonated.

On EVs and storage

Anybody who’s a renewable-energy fan needs to have thought a lot about energy storage. The sun only shines during the daytime and sometimes even prevailing winds don’t blow. Macdonald goes deep on the subject, and offers a pretty compelling argument that there are plausible market-driven solutions to meet storage needs.

One part of the argument involves electric cars, and is obvious once you think of it. Planet-wide, we are now building millions per year, and every one is built around a battery ranging in capacity between 20 and 90 kWh. Do some multiplication and you get a damn big energy-storage capability, made up of a huge number of independently-owned consumer products. They aren’t terribly fussy what time of day you charge them, they are network-connected, and managing the network to charge them when the capacity is most available feels like a straightforward application of the sort that I work on every day.
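That multiplication is worth doing out loud. Taking numbers already on this page — China’s roughly 1.2 million EVs sold in 2018, and an assumed battery size at the midpoint of the 20-90 kWh spread:

```python
# The fleet-as-storage multiplication, with in-text numbers.
evs = 1_200_000          # China's 2018 EV sales, roughly
avg_battery_kwh = 55     # assumed midpoint of the 20-90 kWh range

storage_gwh = evs * avg_battery_kwh / 1e6
print(storage_gwh)       # 66.0 GWh, from one year of one country's sales
```

Tens of gigawatt-hours of distributed, network-connected batteries, per year, from one country. Which is the point.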

On EVs and negawatts

That term “negawatt” was coined way back in 1985 by Amory Lovins, one of the original big energy-policy thinkers. He did the math and showed that the cheapest way to get more power while doing less damage was simply to cut waste. And it’s worked: Our houses are better insulated now, our cars get more mileage, and our appliances run cooler and smarter.

But the global energy-efficiency picture is still terrible. The whole fossil-combustion ecosystem wastes something in the neighborhood of 50% of the available energy. It’s not uniform: For example, a modern natural-gas based generator wastes less. But internal-combustion vehicles waste a lot more; even today the figure is in the 70%-wasted range.

EVs, on the other hand, turn electrons into kilometers at an efficiency well over 90%. Of course, that doesn’t help if you’re using electricity that was generated in a fossil-fueled plant; but traveling in a renewable-fed EV is a rich source of negawatts.

To quote Macdonald: “But, the loss is quite a bit worse with every gallon of petrol poured into an internal combustion engine. Indeed, if your goal was to waste as much energy as possible, you could do no better than to feed gasoline into a billion vehicles, each with their own separate engine, with multiple surfaces from which heat can rise.”
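The round numbers above imply roughly a threefold gap in source energy per kilometer of travel:

```python
# The efficiency gap, using the round numbers from the text.
ice_efficiency = 0.30   # ~70% of the fuel's energy wasted
ev_efficiency = 0.90    # "well over 90%"; use 90 as the floor

# Source energy drawn per unit of energy delivered to the wheels:
source_energy_ratio = (1 / ice_efficiency) / (1 / ev_efficiency)
print(round(source_energy_ratio, 1))   # 3.0
```

With the caveat from above: the gap only counts as negawatts when the electrons come from renewables, not from a fossil-fueled plant.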

Is oil over?

Of course not, don’t be silly. Everyone agrees that coal is “over” and yet its usage hasn’t plummeted, it’s just been drifting down for a long time, with a temporary spike as China industrialized. Petroleum remains useful in a huge number of industrial chemical processes, and in certain particularly energy-intensive transportation applications, like heavy trucks and probably almost all aviation. We don’t need to stamp out oil to save the planet, just burn less.

My own particular guess is that natural gas is going to be strategic. It’s relatively energy-dense, straightforward to extract and transport, and carbon-light; it feels like a good fit for filling in renewable-energy gaps.

On investing

There’s a trend where “ethical investors” try to steer capital away from the petroleum industries, and I broadly approve, mostly due to fear of climate change. But if Macdonald is right (and he’s pretty convincing) this is also a good way to remove a major source of risk from your portfolio.

Don’t say I didn’t warn you.


SF-5: Serverless Bills? 30 Dec 2018, 3:00 pm

One of the best reasons to go serverless is that you might save a lot of money. This is especially true when your load is peaky and has quiet times; because when your infrastructure isn’t working, you’re not paying. But, not all loads are peaky. And here’s a quote from an AWS internal mailing list: “For every compute load, there’s some level of TPS where Lambda is going to be more expensive than servers.” So, when is that? And how much should you care?

[This is part of the Serverlessness series.]
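The break-even question can be sketched with a toy model. The prices here are made up for illustration (they are not AWS’s actual rate card, and real bills involve memory tiers, free tiers, and data transfer); the shape of the curves is what matters:

```python
# Toy break-even model: per-invocation pricing vs. flat server pricing.
# All three prices below are illustrative assumptions, not AWS rates.
import math

req_price = 0.0000002        # per invocation
duration_price = 0.0000021   # per 100 ms compute slice
server_monthly = 150.0       # per server handling up to 1,000 TPS

def lambda_monthly(tps, slices_per_req=1):
    reqs_per_month = tps * 60 * 60 * 24 * 30
    return reqs_per_month * (req_price + duration_price * slices_per_req)

def server_cost(tps):
    return math.ceil(tps / 1000) * server_monthly

for tps in (1, 10, 100, 1000):
    print(tps, round(lambda_monthly(tps), 2), server_cost(tps))
```

At 1 TPS the per-request model costs pocket change and servers are wildly over-provisioned; somewhere in the tens of TPS (in this toy, anyway) the lines cross, and at 1,000 TPS flat-rate servers win by a wide margin. That crossing point is the “some level of TPS” in the quote above.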

Saving money with servers

The answer, of course, like always, is “it depends”. But let’s just jump straight to what strikes me as the canonical example of someone for whom serverless didn’t work out financially. To learn about it, check out this blog piece by Cory O’Daniel, From $erverless to Elixir.

Tl;dr: No, wait, I don’t want to do a Tl;dr! The piece is funny and wise and instructive and if you care about this stuff, you should just go read it. Go ahead, I’ll wait and be here when you get back.

There’s no doubt at all that they saved a lot of money. The key lessons I took away from O’Daniel’s piece:

  1. They were smart to start serverless; the app ran fine and required no management.

  2. It was hard to get the serverful version running; they saw success on the third attempt, and ended up needing to know a whole lot about the inner workings of the Erlang ecosystem.

  3. As Cory says, “Mind that we already have an ops team and we already have a Kubernetes cluster running.”

  4. Elixir is massively cool. I want to use it for something before I give up on computers.

And of course, Cory’s closing soundbite (most highlighted on Medium) is worth reproducing:

So, should everyone go and rewrite their Serverless services in Elixir? Roll out Kubernetes? Get a nose piercing? Absolutely not … What everyone should do is think about where your service is going, and can you afford those costs when you get there. If you don’t have a team of ops people and you aren’t familiar with serverful stuff, spending $30k/mo on HTTP requests might be cheaper than an ops team.

Do I agree?

Mostly, I think. Although if Cory worked for me I probably would have been sort of pushy about making absolutely sure that there was no way to keep the current system around and still save some money, rather than toss-and-replace (on the third attempt). I note that a lot of their charges were API Gateway, and there are other ways to get data into the system. The data was on Kinesis, which is fine, but there are cases where something like SQS or Kafka can be cheaper. But in the last analysis, it’s not written in letters of gold on stone that serverless will always be cheaper.

Cheaper serverless

To get a feel for the kind of thing I’d look for, let’s head over to another blog piece, How We Massively Reduced Our AWS Lambda Bill With Go, by Sam Bashton of a nifty-looking monitoring/troubleshooting service. His narrative sort of echoes Cory’s, off the top: A popular Lambda-based app started running up some big bills, to the point where it was becoming painful.

This particular Lambda (like Cory’s) didn’t do much more than pull some data in over the network and persist it. What was hurting is that they were running the function for each customer, for each AWS region, every five minutes. And since business was good the number of customers was increasing fast; well, you do the math.

Rather than retreating to servers, what they did was smash all those functions together into one, and bring the magic of the Go language Lambda runtime to bear. Let me quote Sam:

In a single morning we refactored the code to use a single Lambda invocation every minute, operating on 20% of the customer base each time. This Lambda spawns a Goroutine per customer, which spawns a Goroutine per region, which spawns a Goroutine per AWS service. The Lambda execution time hasn’t increased significantly, because as before we’re mainly waiting on network I/O - we’re just waiting on a lot more responses at the same time in a single Lambda function. Cost per customer is now much more manageable however, becoming lower and lower with every sign up.

Isn’t that nice? And no throwaway code, either.

Closing thoughts

My opinion hasn’t changed at all: For building software, use serverless where you can. Obviously, “where you can” depends a lot on the specifics of your app and your budget and your sensitivities.

And when you’re working out the costs of serverless vs serverful, ask yourself: How much is it worth to you to not have to manage hosts, or containers, or capacities, or Kubernetes? I don’t know the number, but I’m pretty sure it’s not zero.


Christian Practice 26 Dec 2018, 3:00 pm

I’m in Regina, Saskatchewan with family for the holidays. Someone said “Let’s go to a Christmas Eve carol service” and five of us did that. We went to Lakeview United, where “United” signifies the United Church of Canada, the biggest Protestant denomination up here. It was uplifting and pleasant and sort of sad. Disclosure: I’m not Christian at all; but still.

Carol Service at Lakeview United

As you can see, the congregation (for the 8PM December 24th service) was sparse and elderly. The statistics are remorseless: Christianity is in decline. The proportion of Canadians who attend church weekly is not far from 10%.

This surprises me, if only because the Church’s tools are still very strong. The voices raised in Christmas hymns were breathtaking (the crowd, while small, sang well), the Gospel’s words telling the Christmas story were beautiful, and Sue Breisch, the Minister, was eloquent and welcoming, broadcasting love and empathy. Here are some of her sermons.

The building is really fine in a Sixties kind of way, its main hall comfortable, with big comfy rocking chairs and coffee tables at the back.

Maybe it’s as simple as this: Even if you don’t actively disbelieve (as I do) the Christian narrative, religion has lost its urgency, and as lives are increasingly filled by the Net and the pressures of late capitalism, there are ways to fill Sunday mornings that feel more important.

Having said that, I enjoyed the words and music intensely — lack of faith didn’t get in the way — and recommend doing this sort of thing from time to time. I also think it might be good for you. And maybe I’m wrong, maybe Jesus is the Way, the Truth, and the Light, and you’ll end up saving your soul from eternal torment.


SF-4: Serverless Latency? 14 Dec 2018, 3:00 pm

Suppose we like the idea of going serverless (we do). What should we worry about when we make that bet? What I hear when I talk to people thinking about it, mostly, is latency. Can a run-on-demand function respond as quickly as a warmed-up Web server sitting there in memory waiting for incoming work? The answer, unsurprisingly, is “it depends”.

[This is part of the Serverlessness series.]

What we talk about when we talk about latency

First of all, in this context, latency conversations are almost all about compute latency; in the AWS context, that means Lambda functions and Fargate containers. For serverless messaging services like SQS and databases like DynamoDB, the answer is generally “fast enough to not worry about”.

There’s this anti-pattern that still happens sometimes: I’m talking to someone about this subject, and they say “I have a hard latency requirement of 120ms”. (For those who aren’t in this culture, “ms” stands for milliseconds and is the common currency of latency discussions. So in this case, a little over a tenth of a second.)

Inside AWS, a claim like that would be met with a blank stare, because latency is way, way more than just a number. To steal from an old joke: Latency is more complicated than you think, even when you think it’s more complicated than you think. Let’s start with a graph:

Latency Graph

To start with, nobody should ever talk about latency without a P-number. P50 means a time such that latency is less than that 50% of the time, P90 such that latency is less than that 90% of the time, and so on into P99, P99.9; and then P100 is the longest latency observed in some measurement interval.

Looking at that graph, you can see that half the queries completed in about a quarter of a second, 90% in under a second, 99% in under five seconds, and there were a few trailers out there in twenty-something second territory. (If you’re wondering, this is a real graph of one of the microservices inside EC2 Auto Scaling, some control-plane call. The variation is because most Auto Scaling Groups have a single-digit handful of instances in them, but some have hundreds and a very few have tens of thousands.)

Now, let’s make it more complicated.

Running hot and cold

The time it takes to launch a function depends on how recently you’ve launched the function. Because if you’ve run it reasonably recently, we’ve probably got it loaded on a host and ready to go, it’s just a matter of routing the event to the right place. If not, we have to go find an empty host, find your function in storage, pull it out, and install it before we fire it up. The latter scenario is referred to as a “Cold Start”, and with any luck will only show up at some high P-number, like P90 or higher. The latency difference can be surprising.

It turns out that there are a variety of tricks you can use to remediate cold-start effects; ask your favorite search engine. And that’s all I’m going to say on the subject, because while the techniques work, they’re annoying and it’s also annoying that people have to use them; this is a problem that we need to just do away with.

warming up a lambda

Photo: Ryan Mahle from Sherman Oaks, CA, USA

Polyglot latency

Once the triggering event is routed to your function, your function gets to start computing. Unfortunately, that doesn’t mean it always starts doing useful work right away; and that depends on the language it’s written in. If it’s a NodeJS or Python program, it might have to load and compile some source code. If it’s Java or .NET, it may have to get a VM running. If it’s Go or C++ or Rust, you drop straight into binary code.

And because this is latency, it’s even more complicated than that. Because some of the language runtime initialization happens only on cold starts and some even on warm starts.

It’s worth saying a few words about Java here. There is no computer language that, for practical purposes, runs usefully faster on general-purpose server-side code than Java. That is to say, Java after your program is all initialized and the VM warmed up. There has been a more-or-less conscious culture, stretching back over the decades of Java’s life, of buying runtime performance and being willing to sacrifice startup performance.

And of course it’s not all Java’s fault; a lot of app code starts up slow because of massive dependency-injection frameworks like Spring and Guice; these tend to prioritize flurries of calls through the Java reflection APIs over handling that first request. Now, Java needn’t have sluggish startup; if you must have dependency injection, check out Dagger, which tries to do it at compile time.

The Go language gopher mascot

The take-away, though, is that mainstream Java is slow to start and you need to do extra work to get around that. My reaction is “Maybe don’t use Java then.” There are multiple other runtimes whose cold-start behavior doesn’t feature those ugly P90 numbers. One example would be NodeJS, and you could use that but I wouldn’t, because I have no patience for the NPM dependency labyrinth and also don’t like JavaScript. Another would be Python, which is not only a decent choice but almost compulsory if you’re in Scientific Computing or Machine Learning.

But my personal favorite choice for serverless compute is the Go programming language. It’s got great, clean, fast tooling, it produces static binaries, it’s got superb concurrency primitives that make it easy to avoid the kind of race conditions that plague anyone who goes near java.lang.Thread, and finally, it is exceedingly readable, a criterion that weighs more heavily with me as each year passes. Plus the Go Lambda runtime is freaking excellent.

State hydration

It’s easy to think about startup latency problems as part of the infrastructure, whether it’s the service or the runtime, but lots of times, latency problems are right there in your own code. It’s not hard to see why; services like Lambda are built around stateless functions, but sometimes, when an event arrives at the front door, you need some state to deal with it. I call this process “state hydration”.

Here’s an extreme example of that: A startup I was talking to that had a growing customer base and also growing AWS bills. Their load was super peaky and they were (reasonably) grumpy about paying for computers to not do anything. I said “Serverless?” and they said “Yeah, no, not going to happen” and I said “Why not?” and they said “Drupal”. Drupal is a PHP-based Web framework that probably still drives a substantial portion of the Internet, but it’s very database-centric, and this particular app needed to run like eight PostgreSQL queries to recover enough context to do any useful work. So a Lambda function wasn’t really an option.

Here’s an extreme example of the opposite, that I presented in a session at re:Invent 2017. Thomson Reuters is a well-known news organization that has to deal with loads of incoming videos; the process includes transcoding and reformatting. This tends to be linear in the size of the video with a multiplier not far off 1, so a half-hour video clip could take a half-hour to process.

They came up with this ultra-clever scheme where they used FFmpeg to chop the video up into half-second-ish segments, then threw them into an S3 bucket which they’d set up to fire a Lambda for each new object. Those Lambdas processed the segments in parallel, FFmpeg glued them back together, and all of a sudden they were processing a half-hour video in a handful of seconds. State hydration? No such thing, the only thing the Lambda needed to know was the S3 object name.

Another nice thing about the serverless approach here is that doing this in the traditional style would have required staging a big enough fleet, which (since this is a news publisher) would have meant predicting when news would happen, and how telegenic it would be. Which would obviously be impossible. So this app has SERVERLESS written on it in letters of fire 500 meters high.

Database or not

The conventional approach to state hydration is to load your context out of a database. And that’s not necessarily terrible, it doesn’t mean you have to get stuck in a corner like those Drupal-dependent people. For example:

  1. You could use something like Redis or Memcached (maybe via ElastiCache); those things are fast.

  2. You could use a key/value optimized NoSQL database like DynamoDB or Cassandra or Mongo.

  3. You could use something that supports GraphQL (like AppSync), a query language specifically designed to turn a flurry of RESTful fetches into a single optimized HTTP round trip.

  4. You could package up your events with a whole lot more context so that the code processing them doesn’t have to do much work to get its bearings. The SQS-to-Lambda capability we announced earlier this year is getting a whole lot of use, and I bet most of those reader functions start up pretty damn quick.

Latency and affinity

There’s been this widely-held belief for years that the only way to get good latency in handling events or requests is to have state in memory. Thus we have things like session affinity and “sticky sessions” in conventional Web-facing apps, where you try to route strongly-related queries to the same server in a load-balanced fleet.

This can help with latency (we’ve used it in AWS services), but comes with its own set of problems. Most obviously, what happens when you lose that host, either because it fails or because you need to bounce it to patch the OS? First, you have to notice that it’s gone (harder than you’d think to do reliably), then you have to adjust the affinity routing, then you have to refresh the context in the replacement server. And what happens when you lose a whole Availability Zone, say a third of your fleet?

If you can possibly figure out a way to do state hydration fast, then you don’t have to have those session affinity struggles; just spray your requests or events across your fleet, trying to stress all the hosts evenly (still nontrivial, but tractable) and have a much simpler management task.

And once you’ve done that, you can probably just go serverless, let Lambda handle smoothing out the load, and don’t write any code that isn’t pure value-adding application logic.

How to talk about it

To start with, don’t just say “I need 120ms.” Try something more like “This has to be in Python, the data’s in Cassandra, and I need the P50 down under a fifth of a second, except I can tolerate 5-second latency if it doesn’t happen more than once an hour.” And in most mainstream applications, you should be able to get there with serverless. If you plan for it.


SF-3: Serverless Everything? 11 Dec 2018, 3:00 pm

Sometimes we fans get a little over-excited and declaim that everything should be serverless. After all, we’re pretty convinced that owning data centers and computers is becoming a thing of the past. Well then, how could configuring your own hosts and paying for them even when they’re not working ever be a good idea? Let’s try to be moderate and pragmatic: Serverless, where possible.

[This is part of the Serverlessness series.]

But what does “Where possible” mean? Here’s a nice concrete example from inside AWS: the Amazon MQ service, which is a managed version of the excellent Apache ActiveMQ open-source message broker.

How Amazon MQ works

To make this usable by AWS customers, we had to write a bunch of software to create, deploy, configure, start, stop, and delete message brokers. In this sort of scenario, ActiveMQ itself is called the “data plane” and the management software we wrote is called the “control plane”. The control plane’s APIs are RESTful and, in Amazon MQ, its implementation is entirely serverless, based on Lambda, API Gateway, and DynamoDB.

I’m pretty convinced, and pretty sure this belief is shared widely inside AWS, that for this sort of control-plane stuff, serverless is the right way to go, and any other way is probably a wrong way. Amazon MQ is a popular service, but how often do you need to wind up a new broker, or reconfigure one? It’d be just nuts to have old-school servers sitting there humming away all the time just waiting for someone to do that. Environmentally nuts and economically nuts. So, don’t do that.

The data plane, ActiveMQ, is a big Java program that you run on an EC2 instance the control plane sets up for you. Client programs connect to it by nailing up TCP/IP connections and shooting bytes back and forth. MQ and its clients take care of the message framing with a variety of protocols: STOMP, MQTT, AMQP, and JMS/OpenWire. This is obviously not RESTful.

Because of the permanent connection (unlike an HTTP API that sets them up and tears them down for each request) the messaging latency can be really, really low. Of course, the downside of that is that the scaling is limited to whatever a single broker instance can handle.

Anyhow, the data plane is absolutely not serverless. Is this OK? I’m going to say that it’s just fine. Among other things, at the moment we don’t have a good serverless way to simultaneously use nailed-up TCP/IP connections and dispatch serverless functions; you can imagine such a thing, but we don’t have it now.

Because it’s not serverless its scalability is limited, and you’re going to be paying for it to turn electricity into heat 24/7/365 whether you’re doing any messaging or not. But for a lot of customers who just want somebody else to manage their MQ instances, or who have to integrate with legacy apps that talk JMS or AMQP or whatever, it’s still super useful.

Global weather

Let me give you another example. I was recently talking to a large public-sector weather service that maintains a model of the global weather picture. This needs to be updated all the time, and the update needs to complete in 43 minutes. The actual software is a vast, sprawling thing built up over decades and substantially in FORTRAN. To get acceptable performance, they have to buy insanely expensive supercomputers and run them flat out.

Would serverless be a good idea? Well maybe, but they don’t know how to decompose the model into small enough pieces to fit into serverless functions. “Are you looking at that?” I asked. “We’d love to,” was the answer, “but anyone who can figure it out will probably get a Nobel Prize. Want to try?”

I think that, since weather forecasting is pretty important to civic well-being, we can all agree that this is another scenario where being serverful is OK. (Well, until someone wins that Nobel Prize.)

Back to the mainstream

Now, let’s stipulate that most customers are writing neither real-time message brokers nor models of the global atmosphere. Most are running something with a Web front end and a database back-end and straightforward application logic in between. These days, it’d be unsurprising if there were events streaming in from dumb machines or user clickstreams or whatever. Why shouldn’t all of this be serverless? I think it usually should be, but there are things that reasonable people worry about and deserve consideration. That’s next.


SF-2: Why Serverless? 10 Dec 2018, 3:00 pm

Well obviously: Frugality, security, and elasticity. But I want more, I want better software.

[This is part of the Serverlessness series.]

Frugality
The core idea is that when your workload goes to zero, so does your bill. You might save big; Financial Engines is the first case study that Google popped up for me, but I’ve heard muttered stories in the hallways about way bigger savings than that. And then there’s my co-worker who took his school-photographer wife’s Website billings from $10/month down to a few cents.

And we’re not just talking about Lambda. When there are no messages flowing through your SQS queue, you’re not paying anything. When your Step Functions workflow is waiting, it’s just a row in a database. And so on and so on.

From Werner Vogels’ 2017 re:Invent keynote

Disclosure: I lobbied to get that soundbite into that keynote.

Security
When you can’t see the servers, that means we’re taking care of them. And since servers fail, our services have to be designed to survive restarts. Which means that we can (and do) bounce them whenever they need patching. So whatever hosts your DynamoDB table or your SNS topic are running on, they’re likely freshly-enough patched to cut the number of known vulnerabilities to just about the minimum possible. [Urgh, upon typing this, it occurred to me to check the uptime on the Linux box hosting this blog, and it’s like a year. The box generating the bits you are now reading is probably a soft target for all the bad guys out there. Ahem.]

Anyhow, there’s no perfect security in this bad old world, but freshly-patched instances really do help a lot.

Elasticity
When we deliver serverless services, what we’re really trying to do is get you out of the business of capacity forecasting. That business sucks. It’s hard, and easy to get wrong; the penalty for estimating low is lousy performance for your customers, and estimating high is throwing away money. So go serverless and let us take care of that for you.

(By the way, I’m not claiming that we’re any smarter about capacity management than you are. When you aggregate all the AWS customers’ traffic, the lumpy local variations even out. So it’s a much easier problem if you’re a public-cloud provider. Of course you have to not mind billions in capex.)

What about software quality?

As I’ve blogged before, that “Frugality, Security, Elasticity” pitch operates at a more or less pure business level. But I’m a technologist and engineer, so I have to ask, are serverless application designs better designs? I think the only honest answer is “We don’t know yet.”

Having said that, my gut is saying “yes”. It helps that I’m an old functional-programming bigot, and the notion of stateless functions in the cloud responding to events gives me a warm glow. There are things we do know: microservices that are connected asynchronously with messaging systems (e.g. SQS, 100% serverless) are more robust and flexible than those that aren’t. But… Can I say we have a basis of experience sufficiently strong to say “Serverless software is better”? Nope.

But I’m pretty sure that going serverless isn’t going to give you a worse design. So you should bloody well go ahead and do it, because: Frugality, Security, and Elasticity.

But wait…

Look at that picture above from Werner Vogels’ keynote at the 2017 re:Invent.

Then consider the fact that you have a finite time budget for software design. If you go serverless, then you don’t have to design Kubernetes flows or Auto Scaling policies or fleet-health metrics or any of that other stuff. All your design time can be dedicated to, like Werner’s slide says, software that directly addresses business issues. So, given more design time, you’re probably gonna get a better design with serverless.

My feeling is, the why of serverless is pretty obvious. It’s the how that’s interesting.


SF-1: What Is Serverless? 10 Dec 2018, 3:00 pm

I don’t think it’s that complicated: If you can’t see the servers in the service, then it’s serverless. Yeah, they’re still there, but the whole point is that you can mostly not worry about them.

[This is part of the Serverlessness series.]

That doesn’t work just for functions: Lots of services, for example RDS (in certain flavors), ElasticSearch, and Amazon MQ, require that you pick an instance that the service runs on before you start. Let’s extend the nomenclature a bit; if anything requires that you pre-provision IOPS (or other unit of capacity), then that’s not so serverless.

Launching an Amazon MQ broker

It’s a great service, but it’s not serverless.


Several people, starting with Werner Vogels, have griped about the name “serverless”, finding it ill-conceived and pointed in the wrong direction, preferring positive rather than negative names. Fair enough, but too bad, English is not just a living language but a hasty, impulsive, pigheaded one, and it’s settled on “Serverless” now, so just deal.

By the way, any fool can plainly see that the opposite of Serverless is Serverful, with just one trailing “l”, just like faithful, sinful, and skillful.

It’s not binary

I heartily recommend Ben Kehoe’s The Serverless Spectrum in support of this point. Let me illustrate by example: Your basic enterprise Oracle deployment is the most serverful thing imaginable; with the primary, fallback, and standby instances, it’s triply serverful! RDS Aurora Serverless is more serverless, and while DynamoDB has always been even more serverless, with the advent of On-Demand mode, it’s really serverless.

Another example: An EC2 instance is hardly serverless, a Docker container launched in Fargate is really pretty serverless, while a Lambda function is the epitome of serverlessness.

I think we are past the time to argue about what “serverless” means.


Serverlessness 9 Dec 2018, 3:00 pm

I work in AWS’s Serverless group, and in the process of pulling together my presentation at re:Invent, discovered that I have a lot of opinions on the subject which, while they may well be wrong, are at least well-informed. You can watch that YouTube, but who’s got an hour to spare? And anyhow, blogging’s really my favorite medium, so here we go. If I tried to glom them all together into one mega-essay it’d be brutally long, so let’s go short-form.

The Serverless Fragments

“Serverless Fragment” has five syllables and ongoing doesn’t have that much room at the top of the page for the title, so let’s say “SF”.

  1. SF 1: What is serverless? Tl;dr: It’s when you don’t have to reserve capacity before you start. But it’s not a binary condition.

  2. SF 2: Why serverless? Tl;dr: Frugality, Security, Elasticity. That part’s a no-brainer, but will you also get better software?

  3. SF 3: Serverless everything? Tl;dr: Nope, serverless where possible. To start with, think about control planes and data planes.

  4. SF 4: Serverless latency? Tl;dr: It’s complicated. And, there are things to do about it.

  5. SF 5: Serverless bills? Tl;dr: Not always lower. But it depends how you count it.


Step Functions Integration 30 Nov 2018, 3:00 pm

On Thursday we launched some add-ons for AWS Step Functions, on which I helped a bit. As usual, there’s a nice Jeff Barr blog. This is to add design notes and extra color.

Our announcement describes these as “Integrations” — internally, while we were building them, we called them Connectors, and I’m going to stick with that because it has one less syllable and feels idiomatic.


Up till now, Step Functions knew how to hand work to Lambda functions and to polling “Activity Workers”. As of now, it can also make use of DynamoDB (read/write), Batch (start a job in either fire-and-forget or wait-for-completion mode), ECS (regular and Fargate flavors), SNS (write-only), SQS (write-only), Glue (like Batch, async or sync), and SageMaker (same).

Of course, you could do all this before, by running a little Lambda function to call whatever API, but now Step Functions knows how to make those calls directly. Which means fewer Lambdas to own and maintain. Also this should run a little faster without an interposed function, and finally, Step Functions can be smarter about dealing with retries and throttling and so on.

How it works

Nothing essential in the Amazon States Language has changed. Just as before, you use a Task State to get work done, and you identify the work in the value of the Resource field, which is a URI. Used to be, the only URIs recognized were Lambda ARNs and Activity-worker Task ARNs. Yeah, as far as I know, nobody’s ever registered AWS’s “arn” URI scheme, but for all practical purposes they’re perfectly good URIs.

So all we really had to do to make this work was teach Step Functions to recognize new flavors of ARNs: One for each of the operations I mentioned above. For example, the Resource value that requests fetching a DynamoDB item is arn:aws:states:::dynamodb:getItem. All the other stuff in Task states about Retriers and Catchers and so on goes on working just as it did before.

This notion of short strings that identify a “unit of information or service” is a straightforward use of URIs, and shouldn’t be surprising to anyone who understands how the Web works.

In most cases, the implementation is simple enough; the service just feeds the appropriate input data to the indicated API. But a couple of the new Connectors go further, for example running a Batch job in synchronous mode. It turns out that the Batch service only has a fire-and-forget API, so what the service does in this case is write a rule into the caller’s CloudWatch Events account which catches Batch’s job-finished event and routes it to an SQS queue, which Step Functions has a long-poll posted on to find out when the work is done.

Once again, customers could previously have set this up for themselves (and in fact some have) but it just makes more sense to offer it as a built-in.

Parameters
Step Functions has always had a tool, the “InputPath” field, to filter incoming input and select bits and pieces of it to feed to workers. When we started working on the first few Connectors, we realized it wasn’t quite up to the task of assembling the correct input for an arbitrary collection of API calls. We were at risk of replacing the dumb little Lambdas that existed just to call APIs with even dumber little Lambdas that just wrangled JSON into the right shape to call the API.

Thus the States Language’s brand-new “Parameters” field. To explain this, I’m going to re-use the example from Jeff’s blog linked above:

 1 "Read Next Message from DynamoDB": {
 2      "Type": "Task",
 3      "Resource": "arn:aws:states:::dynamodb:getItem",
 4      "Parameters": {
 5        "TableName": "StepDemoStack-DDBTable-1DKVAVTZ1QTSH",
 6        "Key": {
 7          "MessageId": {"S.$": "$.List[0]"}
 8        }
 9      },
10      "ResultPath": "$.DynamoDB",
11      "Next": "Send Message to SQS"
12    }

You can see the magic read-from-Dynamo ARN there in the Resource field on line 3. But it’s the Parameters field value that’s interesting. It has the right JSON shape to hold the GetItem API arguments, but buried down in the Key field on line 7 there’s a little weirdness going on. It turns out that DynamoDB wants you to pass a string argument by sending JSON that looks like:

{"S": "My own personal string key"}

In line 7, you see a field whose name is “S.$” and value is “$.List[0]”. That “.$” suffix is the new thing; whenever you see that in the Parameters block, it means that the value is to be interpreted as a JSONPath, applied to the state’s input, and then the whole field is replaced by one whose name is the same with the “.$” suffix subtracted, and whose value is whatever you got from the JSONPath.
[Tim, what if their API already uses a field whose name ends in “.$”? -Ed.]
[That would be unfortunate. -T]
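Concretely, suppose the state’s input were the (hypothetical) document {"List": ["msg-0001", "msg-0002"]}. The Parameters block in the example above would then expand, before the GetItem call is made, to:

```json
{
  "TableName": "StepDemoStack-DDBTable-1DKVAVTZ1QTSH",
  "Key": {
    "MessageId": {"S": "msg-0001"}
  }
}
```

The “S.$” field has become plain “S”, and its value is whatever the JSONPath “$.List[0]” selected from the input.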

We think this should provide people with most of what they need to compose the arguments for almost anything you might want to invoke. By the way, the Parameters-block idea wasn’t mine, it was cooked up by folks on the Step Functions team, notably Ali Baghani. And because it’s not mine, I can say: Way cool!


If you go to the console and set up a state machine like the one in the example, we can do an extra trick, namely look at all the Connectors in your machine, figure out what permissions you need to make those calls, and synthesize a Role for you, designed to be used in running that state machine.

API mapping?

Now, if you look at the way we’ve provided mappings for a few of the AWS APIs, you might reasonably wonder “Why not all of them?”. After all, I notice that Diego ZoracKy recently published a general-purpose Lambda function that does just that — give it the API name and the right arguments and it’ll do the call.

It’s not a crazy idea, but we’re not going to do it. Blindly calling APIs without having thought it through a little could be a recipe for unhappiness. We want to make sure that when we make those calls, we’re being sensible about buffering, polling mode, retrying, checking for impossible arguments, and so on. For example, we support sending a message to SQS, but receiving one will require some head-scratching about whether and when to delete after a successful read.

Also, in some cases we might want to do prep work, as we already do to make those asynchronous fire-and-forget APIs look synchronous when you put them in a state machine.

So, we’re going to reserve the right to add Connectors at our own speed and in a thoughtful way.

Future directions

The fact that we identify workers with URIs leaves the door open for any kind of future Connector you can think of. There are lots of obvious candidates in the AWS SDK, to start with. One especially obvious one is supporting the ARNs of other Step Functions state machines, giving you nested child-workflow invocations.

Another is allowing the use of any old HTTP endpoint URL, so your state machine could talk to any Web API in the universe. I suppose we’ll need to add an "HTTP-method" field or equivalent to specify whether you want GET or POST.

Anyhow, there are lots of Step Functions features on the road map that aren’t just Connectors; but I suspect that we’re going to come under pressure to keep adding them, starting today and going on more or less forever.

States-Language housekeeping

The States Language spec has been updated, and so has its source on GitHub. So has the statelint state-machine validator (and its Ruby gem).

All these years into my career, I still get a little thrill being part of the Launch Day dance, a few little pushes with my own pinkies.

I tested my parts pretty hard but there might be bugs; I do look at pull requests and have taken some in the past. In particular I have to say thank-you for jezhiggins’ updates to the supporting j2119 parser generator that made it possible to add Parameters validation to statelint.


Post-REST 18 Nov 2018, 3:00 pm

More or less all the big APIs are RESTful these days. Yeah, you can quibble about what “REST” means (and I will, a bit) but the assertion is broadly true. Is it going to stay that way forever? Seems unlikely. So, what’s next?

What we talk about when we talk about “REST”

These days, it’s used colloquially to mean any API that is HTTP-based. In fact, the vast majority of them offer CRUD operations on things that have URIs, embed some of those URIs in their payloads, and thus are arguably RESTful in the original sense; although these days I’m hearing the occasional “CRUDL” where L is for List.

At AWS where I work, we almost always distinguish, for a service or an app, between its “control plane” and its “data plane”. For example, consider our database-as-a-service RDS; the control plane APIs are where you create, configure, back-up, start, stop, and delete databases. The data plane is SQL, with connection pools and all that RDBMS baggage.

It’s interesting to note that the control plane is RESTful, but the data plane isn’t at all. (This isn’t necessarily a database thing: DynamoDB’s data plane is pretty RESTful.)

I think there’s a pattern there: The control plane for almost anything online has a good chance of being RESTful because, well, that’s where you’re going to be creating and deleting stuff. The data plane might be a different story; my first prediction here is that whatever starts to displace REST will start doing it on the data plane side, if only because control planes and REST are such a natural fit.

RESTful imperfections

What are some reasons we might want to move beyond REST? Let me list a few:


Latency

Setting up and tearing down an HTTP connection for every little operation you want to do is not free. A couple of decades of effort have reduced the cost, but still.

For example, consider two messaging systems that are built by people who sit close to me: Amazon SQS and MQ. SQS has been running for a dozen years and can handle millions of messages per second and, assuming your senders and receivers are reasonably well balanced, can be really freaking fast — in fact, I’ve heard stories of messages actually being received before they were sent; the long-polling receiver grabbed the message before the sender side got around to tearing down the SendMessage HTTP connection. The MQ data plane, on the other hand, doesn’t use HTTP; it uses nailed-up TCP/IP connections with its own framing protocols, so you can get astonishingly low latencies for transmit and receive operations. But your throughput is limited by the number of messages the “message broker” terminating those connections can handle. A lot of people who use MQ are pretty convinced that one of the reasons they’re doing this is that they don’t want a RESTful interface.


Coupling

In the wild, most REST requests (like most things labeled as APIs) operate synchronously; that is to say, you call them (GET, POST, PUT, whatever) and you stall until you get your result back. Now (speaking HTTP lingo) your request might return 202 Accepted, in which case you’d expect either to have sent a URI along to be called back as a webhook, or to get one in the response that you can poll. But in all these cases, the coupling is still pretty tight; you (the caller) have to maintain some sort of state about the request until the server is done with it, whether that’s now or later.

Which sort of sucks. In particular when it’s one microservice calling another and the client service is sending requests at a higher rate than the server-side one can handle; a situation that can lead to acute pain very quickly.
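
The 202-Accepted-plus-polling coupling can be sketched in a few lines; the job table and function names below are invented for illustration, standing in for the server side and the client’s poll loop.

```python
# In-memory sketch of the 202 Accepted pattern: the server accepts the
# work immediately and hands back a URI; the client keeps state (the URI)
# and polls until the work is done.
import itertools

_jobs = {}
_ids = itertools.count(1)

def submit(request):
    """Server side: accept the work, return 202 plus a URI to poll."""
    job_id = next(_ids)
    _jobs[job_id] = {"status": "running", "result": None}
    return 202, f"/jobs/{job_id}"

def finish(job_uri, result):
    """Called whenever the (possibly slow) work eventually completes."""
    job_id = int(job_uri.rsplit("/", 1)[1])
    _jobs[job_id] = {"status": "done", "result": result}

def poll(job_uri):
    """Client side: GET the job URI until it reports done."""
    job_id = int(job_uri.rsplit("/", 1)[1])
    job = _jobs[job_id]
    return (200, job["result"]) if job["status"] == "done" else (202, None)
```

Notice that the caller is on the hook for remembering the URI and re-asking; that per-request client state is the coupling in question.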

Short life

Handling some requests takes milliseconds. Handling others — a citizenship application, for example — can take weeks and involve orchestrating lots of services, and occasionally human interactions. The notion of having a thread hanging waiting for something to happen is ridiculous.

A word on GraphQL

It exists, basically, to handle the situation where a client has to assemble several flavors of information to do its job — for example, a mobile app building an information-rich display. Since RESTful interfaces tend to do a good job of telling you about a single resource, this can lead to a wasteful flurry of requests. So GraphQL lets you cherry-pick an arbitrary selection of fields from multiple resources in a single request. Presumably, the server-side implementation issues that request flurry inside the data center where those calls are cheaper, then assembles your GraphQL output, but anyhow that’s no longer your problem.
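
A sketch of the kind of query this enables; the schema here (user, orders, recommendations) is invented for illustration:

```graphql
# One round trip, cherry-picking fields from several resources
# that would otherwise be several REST GETs.
query HomeScreen {
  user(id: "42") {
    name
    avatarUrl
    orders(last: 3) {
      total
      status
    }
  }
  recommendations(for: "42", first: 5) {
    title
    price
  }
}
```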

I observe that lots of client developers like GraphQL, and it seems like the world has a place for it, but I don’t see it as that big a game-changer. To start with, it’s not as though client developers can compose arbitrary queries, limited only by the semantics of GraphQL, and expect to get uniformly decent performance. (To be fair, the same is true of SQL.) Anyhow, I see GraphQL as a convenience feature designed to make synchronous APIs run more efficiently.

A word on RPC

By which, these days, I guess I must mean gRPC. I dunno, I’m old enough that I saw generation after generation of RPC frameworks fail miserably; brittle, requiring lots of configuration, and failing to deliver the anticipated performance wins. Smells like making RESTful APIs more tightly coupled, to me, and it’s hard to see that as a win. But I could be wrong.

Post-REST: Messaging and Eventing

This approach is all over, and I mean all over, the cloud infrastructure that I work on. The idea is you get a request, you validate it, maybe you do some computation on it, then you drop it on a queue (or bus, or stream, or whatever you want to call it) and forget about it, it’s not your problem any more.

The next stage of request handling is implemented by services that read the queue and either route an answer back to the original requester or pass it on to another service stage. Now for this to work, the queues in question have to be fast (which, these days, they are), scalable (which they are), and very, very durable (which they are).

There are a lot of wins here: To start with, transient query surges are no longer a problem. Also, once you’ve got a message stream you can do fan-out and filtering and assembly and subsetting and all sorts of other useful stuff, without disturbing the operations of the upstream message source.
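
In miniature, and with the standard library’s in-memory queue standing in for a real (durable) bus, the pattern looks like this; the stage names are invented:

```python
# Toy validate-then-enqueue-and-forget pipeline: the front stage checks
# the request and drops it on the bus; a downstream stage drains the bus
# at its own pace.
import queue

bus = queue.Queue()

def handle_request(payload):
    """Front stage: validate, drop on the queue, and stop caring."""
    if not isinstance(payload, dict) or "user" not in payload:
        raise ValueError("bad request")
    bus.put(payload)   # a durable bus in real life; in-memory here

def drain(process):
    """Downstream stage: read the queue and hand each message on."""
    handled = 0
    while True:
        try:
            msg = bus.get_nowait()
        except queue.Empty:
            return handled
        process(msg)
        handled += 1
```

The decoupling is the point: `handle_request` returns immediately regardless of how busy the downstream stage is, which is what absorbs those transient query surges.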

Post-REST: Orchestration

This gets into workflow territory, something I’ve been working on a lot recently. Where by “workflow” I mean a service tracking the state of computations that have multiple steps, any one of which can take an arbitrarily long time period, can fail, can need to be retried, and whose behavior and output affect the choice of subsequent output steps and their behavior.

An increasing number of (for example) Lambda functions are, rather than serving requests and returning responses, executing in the context of a workflow that provides their input, waits for them to complete, and routes their output further downstream.
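
A toy version of that idea: a driver walks a list of steps, feeds each one the previous step’s output, retries failures, and records what happened, so no thread has to sit blocked on a slow step. This is a sketch of the concept, not Step Functions’ actual semantics; all the names are invented.

```python
# Minimal workflow driver: ordered steps, output piped to the next step,
# per-step retries, and a history of what happened.
def run_workflow(steps, payload, max_retries=2):
    history = []
    for name, fn in steps:
        for attempt in range(max_retries + 1):
            try:
                payload = fn(payload)
                history.append((name, "succeeded", attempt))
                break
            except Exception:
                if attempt == max_retries:
                    history.append((name, "failed", attempt))
                    return history, None
    return history, payload
```

A real workflow service keeps `history` durably and lets steps take weeks; the control flow, though, is exactly this shape.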

Post-REST: Persistent connections

Back a few paragraphs I talked about how MQ message brokers work, maintaining a bunch of nailed-up network connections, and pumping bytes back and forth across them. It’s not hard to believe that there are lots of scenarios where this is a good fit for the way data and execution want to flow.

Now, we’re already partway there. For example, SQS clients routinely use “long polling” (typically around 30 seconds) to receive messages. That means they ask for messages and if there aren’t any, the server doesn’t say “no dice”; it holds up the connection for a while and, if some messages come in, shoots them back to the caller. If you have a bunch of threads (potentially on multiple hosts) long-polling an SQS queue, you can get massive throughput at low latency and really reduce the cost of using HTTP.
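
Long polling is easy to mimic with the standard library: the receiver blocks for up to a wait period instead of being told “no messages” instantly. This is a stand-in for what SQS does over HTTP, not the real client API.

```python
# Long polling in miniature: block for up to `wait` seconds rather than
# returning an immediate empty answer.
import queue

def long_poll(q, wait):
    """Return a message if one arrives within `wait` seconds, else None."""
    try:
        return q.get(timeout=wait)
    except queue.Empty:
        return None

q = queue.Queue()
q.put("hello")
```

A message that’s already waiting comes back immediately; an empty queue costs one held-open call instead of a flurry of failed ones.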

The next two steps forward are pretty easy to see, too. The first is HTTP/2, already widely deployed, which lets you multiplex multiple HTTP requests across a single network connection. Used intelligently, it can buy you quite a few of the benefits of a permanent connection. But it’s still firmly tied to TCP, which has some unfortunate side-effects that I’m not going to deep-dive on here, partly because it’s not a thing I understand that deeply. But I expect to see lots of apps and services get good value out of HTTP/2 going forward; in some part because as far as clients can tell, they’re still making, and responding to, the same old HTTP requests they were before.

The next step after that is QUIC (Quick UDP Internet Connections) which abandons TCP in favor of UDP, while retaining HTTP semantics. This is already in production on a lot of Google properties. I personally think it’s a really big deal; one of the reasons that HTTP was so successful is that its connections are short-lived and thus much less likely to suffer breakage while they’re at work. This is really good because designing an application-level protocol which can deal with broken connections is super-hard. In the world of HTTP, the most you have to deal with at one time is a failed request, and a broken connection is just one of the reasons that can happen. UDP makes the connection-breakage problem go away by not really having connections.

Of course, there’s no free lunch. If you’re using UDP, you’re not getting the TC in TCP, Transmission Control I mean, which takes care of packetizing and reassembly and checksumming and throttling and loads of other super-useful stuff. But judging by the evidence I see, QUIC does enough of that well enough to support HTTP semantics cleanly, so once again, apps that want to go on using the same old XMLHttpRequest calls like it was 2005 can remain happily oblivious.

Brave New World!

It seems inevitable to me that, particularly in the world of high-throughput high-elasticity cloud-native apps, we’re going to see a steady increase in reliance on persistent connections, orchestration, and message/event-based logic. If you’re not using that stuff already, now would be a good time to start learning.

But I bet that for the foreseeable future, a high proportion of all requests to services are going to have (approximately) HTTP semantics, and that for most control planes and quite a few data planes, REST still provides a good clean way to decompose complicated problems, and its extreme simplicity and resilience will mean that if you want to design networked apps, you’re still going to have to learn that way of thinking about things.


Car Capitalism 13 Nov 2018, 3:00 pm

What happened was, I was hurtling around a mall parking lot in a beautiful British-designed hundred-thousand-dollar sports car, and I thought “Is this the good side of capitalism?”

I ♥ Cars

Disclosure: I like driving sufficiently well to have written, ten years ago, an encomium on the subject that includes a police takedown and a poem.

And there are lots of things to like about the business. It produces products across a huge range of prices that work pretty well — better every year, in fact — and last a long time, and about which people have strong aesthetic feelings.

There’s no suggestion of monopoly; competition is fierce and it’s possible for new companies to grab a foothold. The industry tends to place value on its workers, paying them and treating them reasonably well. They do not, at least mostly, have bullshit jobs.

Also, cars address humans’ naturally nomadic nature; there is a special joy in getting on the road and heading out in any direction you damn well please, as far as the road goes. Making that possible really just can’t be a bad thing.


Automobiles have had to be regulated fiercely almost from day one: Their speeds, where they can drive and park, the safety standards on their tires and electronics and brakes and crumple zones and seatbelts and child seating, and of course emissions. The notion of a laissez-faire auto industry is laughable.

And given the slightest chance, car companies lie, cheat, and steal. For example, the recent “dieselgate” scandal played out against a backdrop of nudge-nudge wink-wink regulatory capture where everyone knew that any given car emitted more and got worse mileage than it said on the label. Sometimes the corruption was laughably public, as with the US regulators classifying shitboxes like the PT Cruiser as “trucks” so they could skate around emission regulations.

Not to mention the resistance, in recent years, to looking seriously at electric cars. In the face of terrifying climate-change predictions, the industry did the absolute bare minimum they were forced to. Only now, under combined pressure from global regulators and Tesla engineering, are they showing signs of taking it seriously.

Your point is?

I’m a left-winger and somehow still like a lot of things about business: The drive to figure out what people need and want and get that to them; the labyrinthine fascinations of marketing and sales; drama in trying something out that might not work; satisfaction of being on a well-functioning team.

But yeah, the auto industry is the nice end of the private sector. So much of business is poverty-by-policy, bullshit jobs, institutionalized mismanagement, work-life balance seen as a failing, egregious sexism, corruption of the public sector, and hyperentitled one-percenters who are so, so sure that they earned it all with their own hard work, deserve every penny, and the 99% are just losers who deserve what they’re getting. [Me, I got lucky and know so many people who are smarter than me and work harder and are struggling to make ends meet. Why is that so hard to admit?]

I’m an optimist. I think we can find a better and more balanced way to build an economy and, in the fullness of time, will. And I hope we can still have cool cars.


Car-Charge Economics 4 Nov 2018, 3:00 pm

The Wikipedia article on Electric Car Use by Country is interesting. Below I excerpt a graph (misspellings: theirs) of the leading electric-car jurisdictions: As I write, Norway leads, at over 20%, while the US average is 1.5%. (Visit the Wikipedia link for the latest whenever you read this.) How are all these cars going to be fed? Let’s consider the future business of car-charging.

Top countries in adoption of battery electric vehicles

My own angle

Since I’m about to become an electric-car owner, I’ve been pre-planning trips, both for work (i.e. to Seattle) and to visit family elsewhere in Western Canada. And I’m having a feeling I last had in the Nineties, as a bleeding-edge traveling Internet user. Back then, when you picked your hotel, you really cared about whether your dial-up Internet would work — there were certain 20th-century “digital” hotel phone systems that got in the way, and then some places had proprietary plugs, and others blocked calls to the local PoP because they thought you were trying to dodge their larcenous long-distance charges, which you were.

As a side-effect of this, I’ve learned a lot about what kinds of chargers there are, and it raises questions in my mind of how we get the ones we need, and (chiefly) who’s going to pay for them.

Defining terms

  1. A BEV is a Battery Electric Vehicle. Also you sometimes hear PEV where P is for Plugin.

  2. There are a bunch of ways to talk about how fast a charger charges your BEV, but I don’t think there’s a standard acronym for my favorite, how many km of range you get per hour of charging. Let’s use kRh for “km of range per hour”. American and British readers can divide by 1.6 and call those mRh.

  3. A Level 1 (L1) charger means plugging straight into your home current, either 240 or 110 volts depending where you are in the world. This is an unsatisfactorily slow way to charge a BEV, a handful of kRh.

  4. An L2 charger is what many people install at home when they buy a BEV. Ways to measure it include kW (6 or 7), amps (30-ish), and you might get 30 or so kRh. The idea with an L2 is, you plug in your BEV while you sleep, and it’ll be charged when you want to head out in the morning.

  5. An L3 charger is what you find in Tesla’s Supercharger network, and recently other networks such as Ionity in Europe. Don’t know the Tesla details, but the majority of publicly accessible ones in late 2018 run at 50 kW or so, which is to say probably better than 200 kRh.

    The Jaguar I’ve ordered is advertised as being able to charge 80% in 40 minutes on a 100 kW charger (of which there are approximately zero as I write), which my arithmetic suggests is like 450 kRh. Now, it’s more complicated than that, because it’s actually amperes that charge your car, which is a function of the upstream source plus circuits both in your charger and in your car. And it’s more complicated than that because fast chargers charge cars fast, but only for the first 80% or so of capacity, then they slow down. So the polite thing to do at a fast highway charger is to charge up to only 80%. For what it’s worth, there’s excited talk about higher and higher charger ratings; Ionity claims they’ll be shipping 350 kW chargers: “Stop, drink a coffee, and go.”
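
The arithmetic behind those numbers is worth making explicit. Assuming a full-charge range of about 375 km for the I-Pace (the figure the 450 kRh claim implies), 80% in 40 minutes works out like this:

```python
# Sanity-checking the kRh arithmetic: km of range per hour of charging.
full_range_km = 375          # assumed full-charge range

# "80% in 40 minutes on a 100 kW charger":
krh_fast = 0.8 * full_range_km / (40 / 60)   # km of range per hour
print(round(krh_fast))                        # → 450

# And kRh to mRh: divide km by 1.6 to get miles.
print(round(krh_fast / 1.6))                  # → 281
```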


A lot of people put in L2 chargers at their residences. They cost under a thousand bucks, but you can’t install one yourself, so for most people, by the time you’ve paid the electrician and so on you’re probably in for over a grand. I suspect these costs will come down, but not hugely; volume will go up, which will help, but nobody’s predicting big technology breakthroughs. Having said that, a thousand bucks may be economically tolerable when you consider the trips to the fuel pump you’re avoiding.

An L3 charger is another story. This useful page at OhmHome suggests you’re looking at $50K and up, possibly way up. Among other things, you have to run three-phase power to the site, and you have to pay a highly skilled professional to do the installation because at this power rating, mistakes are apt to be lethal. In conversations before I ran across the OhmHome site, I’d heard typical costs north of $100K, and some really extravagant numbers for the cost of the Supercharger stations.

So, given all that, who should build chargers, and where?

Hotel and residential

I think this one’s pretty easy. Hotels and residential developments should try to have a number of charge stations corresponding to the local proportion of electric cars. Except they should start with maybe twice the current value, because the proportion of electric cars being sold is way higher than the proportion already on the road. And I suspect that in places like hotel and condo garages, the cost of installing ten is way less than ten times the cost of installing one. These should be L2 for charging while sleeping; there’s no good reason to pay up for fast chargers.

A word of warning to hotel operators and residential developers: The time is very near where I won’t consider your hotel or your condo if I can’t be confident of charging while I sleep.


Office buildings

This is an interesting one. Lots of office buildings (including Amazon’s) have car chargers in the basement parking. But so far, near as I can tell, they all seem to be L2. I’m not sure I see the point; even if you could hog the charger all day while you work, you probably wouldn’t get a full charge. Maybe it’s useful for people who have a short commute and don’t have a charger at home? And in fact most of these things are unused when I drive by them, in Vancouver and Seattle. There might be a case for L3 chargers at HQ for people like me who occasionally drive down to Seattle and back in a day (I’ve done it, it sucks) but the current L2 deployment seems wasted.

Roadside attractions

Now, here’s where it gets interesting. When you’re doing an extended long-distance drive, you really need fast chargers or you’re going to be ridiculously, laughably slower than with a fossil-fuel car. So the place for them is by the highway. Who’s going to pay for them? Especially given the high cost?

I originally thought that coffee shops would be the natural homes for these things (add a charger and attract the crowds), but at $100K I don’t think the economics work. Here are a few other parties who might have an interest in making the investment to put a fast charger near a big road:

  • Malls; the scale is presumably large enough that the investment looks more tractable, and they have an interest in keeping you parked for a while once you’ve arrived.

  • Chambers of commerce; put a charger near the middle of a small town’s roadside shopping street. This is a variation on the mall theme.

  • Car companies, emulating Tesla’s strategy of using a charging network to help sell a brand of car. I’ve heard rumbles that Volkswagen is thinking of this, and they certainly have the scale.

  • Governments, interested in trying to meet their carbon-load reduction targets.

  • Electric utilities, trying to convince lots of people to buy electric cars. Since the vast majority of electric cars spend their time shuttling people back and forth to their place of work, the utility probably doesn’t need to charge enough to recoup the investment. In other words, the chargers serve a psychological function, reassuring people that if they have the urge to drive across a couple of time zones to visit the family for Thanksgiving, that’ll be no problem.

The future

One way or another, I bet there are going to be plenty of chargers out there. Just like today I don’t have to worry much about whether the hotel I’m going to has Internet.


Jag Diary 4: Marketing Tour 27 Oct 2018, 3:00 pm

What happened was, Lauren and I played hookey from work and took in Jaguar/Land Rover’s Art of Performance tour, and it was a total blast, a couple hours of pure fun. This is just a recommendation for the show plus a few things I’ve learned about the car (which remains super interesting) since the last Jag-Diary entry.

The Tour

If it’s coming anywhere near you, I recommend signing up and going; near as I can tell, the only requirement is that you have a driver’s license. It was in a big boring suburban mall parking lot. They started with good coffee and hors d’oeuvres in a tent, and a bunch of pretty Jags and Range Rovers outside in the parking lot, all unlocked so you could get in and fool around. I can’t tell one Range Rover from another, but there was this one the size of a small nation-state, and I mean just the back seat.

Back seat of a large Range Rover

Not like the beat-up old rattler we had on the farm.

We went in for an intro lecture, which was given by this charming dude who totally loved cars; early in his remarks he said “Our products are things that absolutely nobody really needs”. In maybe fifteen minutes we got the history of Jaguars, which is pretty interesting; also of Land Rover; like quite a few greybeards with a rural background, I have a memory of the farm Land Rover, the old kind with the sideways seats in the back. The new ones aren’t like that at all. The host was actually a little sarcastic: “We build these vehicles that can go everywhere and do everything, but I guess it’s OK that a lot of their owners don’t go anywhere or do anything”.

They showed some history-of videos, which were lavishly produced, with voiceover in ludicrously-plummy British toff accents. In which the pronunciation of the word “Jaguar” is ludicrous: JAAYG-YOU-AWW. I use a gruff North American JAG-WAHR. Neither is etymologically sound; Wikipedia tells me that the name (of the cat, obviously) derives from a Tupian word and was something like “yaguareté”.

The staff were uniformly charming, cheerful, and genuinely unironically enthusiastic about their love of cars.

The first demo was riding around in a pair of Land Rovers that they took up over and around the sides of a purpose-built obstacle, tilting sideways at an angle of 27° (feels terribly dangerous) and up over an odd-shaped construct that left the car balancing on two wheels. Very cushy. Yawn. Over on Twitter, Mark Pedisic posted some pix of the event, including a Range Rover up in the air.

Then we took turns driving F-Types around this big parking lot. There were pairs of cones all over with lights on top, which lit up in random sequence and you had to drive through the ones with the lights on, getting scored on speed, precision, and distance (less is better). You had a driver in the passenger seat who yelled “Left! Hard right! Boot it! U-turn right!” and so on. I pretty well totally sucked, going through at least one gate backward. Never have been any good at following instructions.

The F-Type is a blast though, a two-seater that is somewhat Porsche-inspired in that it has no decoration, just shape. Its engine sounds like a dragon’s cough, there’s plenty of kick, and it loves being flung into a corner.

Jaguar F-Type at the Art of Performance Tour Jaguar F-Type

Then we walked over to another part of the parking lot where they had the I-Paces, which we drove around a course laid out in red traffic cones, no lights or anything. The I-Pace isn’t quite as agile into the corners as the F-Type but it’s still superb, and OMG it has twice the kick coming out of the corner and when you stomp the accelerator you can’t help but grin ear-to-ear. Also, the silence is eerie. The seats were divine. I thought it was way more fun to drive than the F-Type. I can’t wait to get mine.

Anyhow, if you like cars and you get a chance, go take in the show.

Leaping Jaguar logo Snarling Jaguar logo

Above: Leaping cat. Below: Grumpy cat.

More things we know

  • The I-Pace is somewhere between 20 and 30 percent less efficient at turning electrons into kilometers than its Tesla competition. Which means it is still vastly, hugely more efficient at turning units of global carbon loading into km than internal-combustion engines; “fossil cars”, we BEV (battery electric vehicle) geeks say. To start with, fossil car efficiency, in terms of turning the energy available in the fuel into km on the road, tops out around 30%; BEVs get 90% or more. The carbon load depends a lot on that of your local electricity grid. Which is to say excellent up here in the Pacific Northwest.

  • When you’re discussing electric cars, you can talk about kWh/100km or Wh/km; I prefer the latter. Modern BEVs get numbers between 200 and 250 Wh/km.

  • The Jag’s effective range, for typical driving patterns, is somewhere around 225 miles, 375 km.

  • Android Auto and iOS CarPlay now run fine on the I-Pace. At the moment, you have to put Android Auto into developer mode, go into the Developer menu, and enable 1080p output, or it looks junky.

  • Earlier reviews said that the infotainment was laggy and clumsy. My personal experience of it was fine, so they must have fixed it.

  • I threw it around the little track pretty hard, to the extent that on one straightaway the jovial Jag guy in the passenger seat exclaimed a word of caution. At the end of that straightaway, I tried to take the almost-180° turn with just the regenerative braking for slowdown, and I think it could have worked but my nerve failed me and I hit the brake. Fun!

  • The big modern electrics that are starting to arrive (Jag, Audi, Mercedes, Porsche) can charge from chargers delivering 100 kW and up. Here in Western Canada, the “Fast DC” chargers only give 50 kW. My calculations suggest that such a charger will add around 220 km of range per hour of charging.

  • At the moment, long-distance trip planning in a BEV is a complicated thing. If you want to minimize your travel time, you have to plan ahead to figure out which chargers you’re going to stop at, and how long you’ll spend at each — for a variety of reasons, you don’t want to go all the way to 100%. There are apps for that.

  • Speaking of which, I have looked at ChargeHub, ChargePoint, Flo, Greenlots, and PlugShare. Of those, PlugShare is by far the best; in particular, its browser version does a great job of providing filters that you can use to see which chargers on the road ahead are appropriate for your use.
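
To check the charging arithmetic in the list above: assuming consumption around 225 Wh/km (the middle of the 200-250 Wh/km range quoted earlier), a 50 kW charger adds range at about:

```python
# Back-of-envelope: km of range added per hour on a 50 kW charger.
wh_per_km = 225        # assumed consumption, mid-range of 200-250 Wh/km
charger_w = 50_000     # a 50 kW "Fast DC" charger

krh = charger_w / wh_per_km   # km of range per hour of charging
print(round(krh))             # → 222, i.e. "around 220 km per hour"
```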

More later, when I have one.


Retiring? 25 Oct 2018, 3:00 pm

I’m not young and I can afford to stop working. I’m wondering if I should.

Reasons to retire

  1. Some mornings, I feel like sleeping in.

  2. And then, when I get up, I’d like to spend two or three hours on Feedly and The Economist, just reading what’s going on in the world.

  3. I’d like to spend more summer time at my cabin.

  4. When I’m engaged in work I bring a whole lot of intensity; not significantly less than a few decades ago, I think. But at the end of the day, man, I’m so tired. Some days I can hardly scare up evening conversation with the family.

  5. Progressive friends, people whose opinions I respect, give me shit about working for Amazon. I claim that the problem is capitalism, flaccid labor laws, and lame antitrust enforcement, not any particular company; maybe I’m right.

  6. I want to write a truly great Twitter client for Android Auto, to keep me informed as I cruise down the road.

  7. I want to start working full-time on AR now so that I’ll have something cool running when the hardware becomes plausible. I have a couple of fabulous app ideas; nothing that would make any money, but I’m OK with that.

Reasons to keep working

  1. I get to write software that filters and routes a million messages a second.

  2. I’m in a position where it’s really hard for people not to listen to my opinions about technology. I’d become amazingly uninteresting about fifteen seconds after retiring.

  3. I like computers, and so it makes sense to work for (what I assume must be) the world’s largest provider of computers to people and businesses who use them.

  4. I get a chance to move the needle, a little, on the way people use computers.

  5. The people at work are interesting and nice; basically none of them get on my nerves.

  6. I learn things all the time about how to think about how to use computers.

  7. The Vancouver tech scene needs an anchor tenant and it’s cool to be helping build one.

  8. The money’s good.

When my Dad retired, he really hadn’t made any plans for what he was going to do with his time, so he didn’t do much, and that was bad; he went downhill really fast. I don’t have plans enough just yet to hit that Eject button.


FaviconOn Weed 20 Oct 2018, 3:00 pm

Last Thursday, cannabis became legal in Canada. For example, here’s my local provincial government’s online cannabis store (screenshot below). There are going to be physical storefronts too, some private-sector, but the licensing process is slow so there aren’t any in Vancouver yet, except of course for the dozens of “dispensaries” that have been up and running for years; I suppose some of them will become legal. Which is to say, it hasn’t been very dramatic. But I think it is sort of a big deal.

BC Cannabis Store

It’s a big deal because it’s an example of democracy actually working. We had a legal framework whose goals — stamp out pot — were not only unachievable but unsupported by evidence. In fact, the support was negative: evidence showed that the previous policy’s effects were, on balance, harmful. And one of our major political parties decided to run on an evidence-based legalization platform, won the election, and went ahead and did it.

Now, we still have a bunch of issues to sort out:

  1. Can legal weed achieve a level of price, quality, and convenience sufficient to drive the current thriving underground trade out of business?

  2. Is buzzed-out driving going to be a problem like drunk driving? Unlike alcohol, we totally don’t have good statistical data on what intoxication measurements correlate with elevated likelihood of accidents. And even if we did, we don’t have high-quality roadside tech for measuring it. There’s legislation in place, but everyone expects a legal/constitutional challenge more or less instantly after the first driving-while-high charge, and from what I read, that law is a pretty soft target.

  3. What are the appropriate cannabis-use limits? Should the legal age be the same as alcohol? For high-judgment jobs like airplane pilot, what is the cannabis equivalent of their traditional “24 hours bottle-to-throttle”?

  4. Where can you use cannabis legally? I fervently support the draconian restrictions on tobacco smoking, but at least half the justification is tobacco’s addictiveness and lethality. And I seem to recall from the seventies that people really liked to get high socially; should there be the cannabis equivalent of licensed public houses? Should they be licensed public houses?

The really interesting question, though, is who’s going to use pot, and how much? I was a college student back in the Seventies and my recollection is that:

  1. Most people did, except for those who also didn’t drink and were just naturally abstemious.

  2. The real “heads” did all the time and were thus not very effective as students or employees, and in some cases really screwed up their lives, and some of those stumbled off into the badlands of speed and opioids and so on, and some of those died of it. But I think that was just them, the cannabis wasn’t the important part of the story.

  3. After a few years I started hearing people griping that weed was just making them feel stupid and paranoid.

  4. Sometime around 1980 almost everyone I knew stopped for one reason or another, often including the discovery of a vocation: microbiology or computer programming or finance or whatever.

Me, I’m strongly convinced legalization is a step forward. People are gonna use weed, and I think it’s a fine thing that they’ll be able to get it with clearly-labeled believable levels of THC and CBD, and minus random pesticides. Because most dope dealers are skanky people you shouldn’t trust.

If you look at history, among the first public servants were the people who inspected the brewers and pubs of Europe to verify that people could trust the advertised strength of beer and advertised size of the mug it came in. So we’re on familiar ground here.

But I do wonder what social patterns will emerge, now that weed’s legal and regulated? The change feels small now, but I’ve no notion how it’ll look in the rear-view in a decade or two.


FaviconJag Diary 3: What We Know 7 Jul 2018, 3:00 pm

Between June 4th, when the first wave of reviews of the New Jag hit (officially the I-PACE, what a dumb name) and the time the salesman called me saying “Time to sign the order if you want to be in the first wave”, I had to decide whether to spend a lot of money on a car I’d never seen or touched. So I paid damn close attention to those reviews. I’m a critical reader, and suspicious about the motives of product reviewers, and I think the picture that emerges is pretty clear. This post is to enumerate what I think it’s possible to know for sure about the car without having owned or even driven one. [Updated based on hands-on experience.]

I’ll throw in a bunch of links down at the bottom to reviews that I think are particularly useful.


  • The story starts in 2014, when Jag leadership decided to go all-in on a from-scratch electric model. They put an integrated development team all in one room at the University of Warwick — not exactly traditional auto-biz practice — and eventually brought the new car from nothing to market in “only” four years, which is considered very good in that industry.

  • It has two motors, one wrapped round each axle, with the space between full of battery, then the cabin perched on top. At moderate speeds, only the back wheels drive.

  • It’s almost all aluminium and, despite that, is still super-heavy (2100kg), mostly because of the battery.

  • I’m not going to recite horsepower and torque numbers that I don’t understand, but people who do understand them sound impressed.

  • I don’t understand charging issues well enough to have an intelligent opinion, but Seth Weintraub does, and his review is full of useful detail. Tl;dr: The range is competitive with other high-end electrics.

  • It doesn’t have gears as such, just buttons: P, N, R, D. The North American edition comes only with air suspension, and has a thing where you can elevate the car for a tricky driveway or rutted gravel, and it settles down automatically at high speeds. I gather the Euro model can be bought with springs.

  • Another difference: The Euro model comes with either a standard or glass roof; in the New World it’s all-glass all the time. Personally, I’d prefer a layer of metal between me and the sun, but they claim it’s sufficiently shaded and UV-impervious.

  • Electrics are super quiet inside so, if you want, the Jag will play you a spaceship-y acceleration sound that changes with the speed. Fortunately it’s optional; although one of the journos who took it out on the racetrack said he found it useful in situations where you don’t have time to look at the speedometer.

  • There’s a screen behind the steering wheel where you can display speed and charge and maps and so on. Front center, there’s a biggish (but not Tesla-size) screen above for infotainment, and a smaller one below for climate control. On the subject of climate control, the console has a couple of actual physical knobs for that.

Black interior · White interior
  • It’s got a fair-size trunk at the back (the back seats fold down 60/40) and a tiny one under the front hood; someone suggested it was just big enough to carry your cat.

  • As with most electrics, you can do one-pedal driving, where easing off the accelerator goes into regeneration mode and provides enough braking for all but exceptional circumstances.

  • You can actually take it off-road, up and down stupidly steep hills, through really deep puddles, and so on: The “LR” part of JLR is Land Rover, and that part of the company knows something about those things.

  • There’s plenty of room inside for four big adults. The person in the middle of the back seat should be on the small side.

  • Nobody has seen either Apple CarPlay or Android Auto at work, but the company claims that both will be supported. My own Jag dealer said he’d heard that they’d done the technology work and were just sorting out licensing and payment. [Hands-on: It works fine!]

  • It has a SIM slot and over-the-air software update.

  • You can equip it with a tow-bar and bike-rack and roof-rack.

  • It’s built, not by JLR themselves, but by Magna Steyr, a contract manufacturer in Graz, Austria, that also builds the Mercedes G-Class and BMW 5 Series.

Things that are good

  • Everyone agrees that it’s a blast to drive. What’s interesting is that the most common comment was “feels just like a Tesla”. The Top Gear scribe pointed out, in a melancholy tone, that apparently all electric motors feel more or less like all others. This is a big change from the days of internal-combustion engines, which have all sorts of personality. It’s fast, maneuverable, and comfortable. [Hands-on: Oh yeah!]

  • The one-pedal driving mode takes a bit of getting used to but all the journos ended up loving it, and assuming that pretty much everyone would use it all the time.

  • The seats are said to be super-comfortable. [Hands-on: Yup.]

  • It has all the bells and whistles and technology gadgets anyone could want.

  • The cabin has all sorts of storage space in bins here and there and under the back seats and so on.

  • It has more than enough range for people who drive around town and then occasionally go 200+ km for business.

Things that are not so good

  • If you’re a road warrior, Jag doesn’t have anything to compete with Tesla’s supercharger network. I’ve started poking around PlugShare and ChargePoint and so on, and I think you could manage road trips, but it’s not going to be as slick as with a Tesla. Perhaps this situation will improve?

    Me, I have a carport on the back alley and I’ll put in a charger and I should be fine.

  • The infotainment system is slow and laggy, and some important settings are deeply nested into the menus. Android Auto is my answer to that. [Hands-on: The lag is not really an irritant once you get into the system’s rhythm but yeah, the menus could be better-organized.]

  • The storage space isn’t that well-organized and it’s not obvious where to stow the charging cables.

  • The fifth person in the car is going to be kind of cramped.

  • Visibility out the back window is lousy, with big rear posts getting in the way. [Hands-on: The window is tiny, nearly horizontal, and shaded, so the view is ludicrously bad. There’s a back-up camera to help with parking though.]

  • The brake pedal tries to combine regenerative and friction braking and as a result is said to feel soft and weird. [Hands-on: Don’t know what was bothering them, my leg likes it fine.]

  • The air-suspension ride has been reported as feeling a bit jittery and unstable at low/moderate speeds. [Hands-on: Nope, but I’ve found the regen braking can be a bit quease-inducing for passengers while driving in traffic.]

  • The center console crowds the driver’s leg a bit; more of a problem in left-hand drive vehicles, obviously. [Hands-on: Not at all, there’s plenty of room for my legs, and I’m 5’11".]

My conclusion

What happened was, when the first buzz of publicity hit in March I was interested enough to drop by Vancouver Jaguar and talk to Caleb Kwok, the sales manager. He’s a plausible guy, responsive to email, and anyhow, he convinced me to put down a refundable deposit, buying me a place near the front of the line at the time actual orders would open up. Which turned out to be last week.

By which time I’d read all the material summarized in this piece. On balance, I liked what I heard; the pluses were pretty big and none of the minuses bothered me that much. Remember, the longest trip I normally take is 230km to Seattle, where I park for a couple of days then drive home.

So I signed on the dotted line, and my deposit is no longer refundable.

The big worry, of course, is reliability and manufacturing quality. Jaguar, at various times in its history, has had a miserable reputation. Of one famous model, they used to say “It’s a great car, so buy two, because one will always be in the shop.” It’s worse than that; Jag at one point had a particularly stinky track record around electrical systems.

But there are stats suggesting Jag’s doing better in recent years. And then there’s the fact that it’s being built in a plant where they also make Mercedes and BMW. Granted, I’m taking a chance here.

Helpful reviews

