I've used AWS's SQS at several companies now. In general, it's a pretty reliable and performant message queue.
Previously, I'd used SQS queues in application code. A typical application asks for 1-10 messages from the SQS API, receives the messages, processes them, and marks them as completed, which removes them from the queue. If the application fails to do so within some timeout, it's assumed that the application has crashed/rebooted/etc, and the messages go back onto the queue, to be later fetched by some other instance of the application.
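To make that concrete, here's a minimal sketch of that receive/process/delete loop using the AWS SDK for JavaScript v3 (the queue URL and processMessage() are placeholders, not our actual code):

import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs"

const sqs = new SQSClient({})
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" // placeholder

async function pollOnce() {
    const { Messages } = await sqs.send(new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10, // ask for 1-10 messages
        WaitTimeSeconds: 20,     // long polling
    }))

    for (const message of Messages ?? []) {
        await processMessage(message.Body) // stand-in for your business logic

        // Deleting the message is how you "mark it as completed". If this never
        // happens before the visibility timeout expires, the message becomes
        // visible on the queue again for some other instance to pick up.
        await sqs.send(new DeleteMessageCommand({
            QueueUrl: queueUrl,
            ReceiptHandle: message.ReceiptHandle,
        }))
    }
}

async function processMessage(body?: string) {
    console.log("processing:", body)
}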
To avoid infinite loops (say, if you've got a message that is actually causing your app to crash, or otherwise can't be properly processed), each message has a "receive count" property associated with it. Each time the message is fetched from the queue, its receive count is incremented. If a message is not processed by the time the "maximum receive count" is reached, instead of going back onto the queue, it gets moved into a separate "dead-letter queue" (DLQ) which holds all such messages so they can be inspected and resolved (usually manually, by a human who got alerted about the problem).
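(For reference, that DLQ behavior is configured with a "redrive policy" on the source queue. A rough sketch with the same v3 SDK, using placeholder URLs/ARNs; the same thing can be set up in the console or your infrastructure-as-code tool of choice:)

import { SQSClient, SetQueueAttributesCommand } from "@aws-sdk/client-sqs"

const sqs = new SQSClient({})

await sqs.send(new SetQueueAttributesCommand({
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue", // placeholder
    Attributes: {
        // Once a message has been received maxReceiveCount times without being
        // deleted, SQS moves it to the dead-letter queue instead of making it
        // visible again.
        RedrivePolicy: JSON.stringify({
            deadLetterTargetArn: "arn:aws:sqs:us-east-1:123456789012:my-queue-dlq", // placeholder
            maxReceiveCount: 5,
        }),
    },
}))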
That generally works so well that today we were quite surprised to find that some messages were ending up in our DLQs even though the code we'd written to handle those messages wasn't showing any errors or log messages about them. After pulling in multiple other developers to investigate, one of them finally gave us the answer, and it came down to the fact that we're using Lambdas as our message processor.
So here's the issue, which you'll run into if you process your SQS messages with a Lambda and that Lambda gets throttled:
The Amazon-managed process that feeds SQS messages into that Lambda will fetch more messages than your throttled function can handle. (I'm not sure if there's a way to tell whether it was one large batch, or lots of individual fetches in parallel, but either way the result is the same.)
Every time it does this, it increments the messages' receive counts. And of course when they reach their max receive count, they go to the DLQ, without your code ever having seen them.
This happens outside of your control and without your knowledge. So when you get around to investigating your DLQ, you'll be scratching your head trying to figure out why messages are in there. And there's no configuration you can change that fixes it, not even setting the Lambda's SQS batch size to 1.
If you think you might be running into this problem, check two key stats in the AWS console: the Lambda's "Throttles" metric, and the DLQ's queue size. If you see a Lambda that suddenly gets heavily throttled, and that correlates with lots of messages ending up in your DLQ but no errors in your logs, this is likely your culprit.
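(If you'd rather check those from code than click around the console, here's a rough sketch using the v3 SDKs; the function name and queue URL are placeholders:)

import { CloudWatchClient, GetMetricStatisticsCommand } from "@aws-sdk/client-cloudwatch"
import { SQSClient, GetQueueAttributesCommand } from "@aws-sdk/client-sqs"

const cloudwatch = new CloudWatchClient({})
const sqs = new SQSClient({})

// How many times the Lambda was throttled over the last 24 hours, bucketed by hour:
const throttles = await cloudwatch.send(new GetMetricStatisticsCommand({
    Namespace: "AWS/Lambda",
    MetricName: "Throttles",
    Dimensions: [{ Name: "FunctionName", Value: "my-message-processor" }], // placeholder
    StartTime: new Date(Date.now() - 24 * 60 * 60 * 1000),
    EndTime: new Date(),
    Period: 3600,
    Statistics: ["Sum"],
}))

// How many messages are sitting in the DLQ right now:
const dlq = await sqs.send(new GetQueueAttributesCommand({
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue-dlq", // placeholder
    AttributeNames: ["ApproximateNumberOfMessagesVisible"],
}))

console.log(throttles.Datapoints, dlq.Attributes)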
It seems crazy that it works this way, and apparently it has for years. AWS's internal code is doing the wrong thing and wasting developer hours across the globe. Ethically, there's also the question of whether you're getting billed for all of those erroneous message receives. But I'm mostly worried about having a bad system that's a pain in the ass to detect and work around.
Me, minutes before a meeting: Just one more line. One more line of code.
(15 minutes later, seeing a clock): Dangit, I'm late for my meeting.
Me: "Why do I put the cap back on my water bottle after every sip? This is annoying even to myself."
Also me: Knocks over the full bottle I just minutes before had placed between me and my keyboard and yet had somehow forgotten existed.
(Thankfully, the cap was on! 😆)
For a while I'd been maintaining two versions of the FeoBlog TypeScript client: one for Deno, and one packaged as a node module.
But maintaining two codebases is not a great use of time. So now the Deno codebase is the canonical one, and I use DNT to translate that into a node module, which I then import into the FeoBlog UI, which you are probably using right now to read this post. :)
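(If you haven't seen DNT: the build script is just a small Deno program, roughly like this, with placeholder entry point and package metadata rather than FeoBlog's real config:)

import { build, emptyDir } from "https://deno.land/x/dnt/mod.ts"

await emptyDir("./npm")

await build({
    entryPoints: ["./mod.ts"],  // the Deno entry point
    outDir: "./npm",            // where the generated node module lands
    shims: { deno: true },      // shim Deno globals for Node
    package: {
        name: "my-client",      // placeholder package metadata
        version: "0.1.0",
    },
})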
Is it weird that I'm starting to feel like having a phone number is not worth it?
First, I use actual phone conversations VERY rarely. If I'm home and want to have a voice conversation with someone, I usually use VoIP (usually: FaceTime Audio) because it has higher quality than cell phone calls. If I'm out and about and want to communicate meeting time/place with someone, I'm going to send (or expect) a text message. So there's the question about whether it's worthwhile continuing to pay for a service that I don't use.
But the real problem is that modern apps and online services use your phone number as if it's a unique ID. If you give some organization your phone number, they'll definitely use it to uniquely identify you, possibly even to third parties.
And, even if you don't give them your phone number directly, since apps can slurp your contact info from any of your friends' contact lists, they've still got it.
And if companies can store this data about you, that data can get hacked and leaked. "HaveIBeenPwned" recently added phone numbers to their search because it's become such a concern.
If you worry about giving out your Social Security number, you should probably worry just as much about giving out your phone number. To companies or your friends.
This doesn't even touch on the problem of spam/phishing/fraudulent calls, which is another real problem w/ the phone system.
So, despite having the same phone number since 1998, I'd love to get rid of mine. Unfortunately, I can't yet because so many systems (ex: banks, messaging apps) do use it to identify you.
Plus, imagine you give up (or just change) your phone number. Now your old number is available for re-use. If someone were to claim it, they could then use it to impersonate you on any systems that haven't been updated with your new (lack of) phone number.
I’m thankful for when the cat comes and gets me to come to bed, as if to say: “uh? Hey. I’m sleepy and I need some warm legs to curl up on. Can you get in bed already?” ❤️
So recently Elon has:
It sure is starting to seem like he paid a lot of money to delegitimize it as a communication platform.
Guess you can't get "cancelled" if people and bots are indistinguishable.
The weather finally got decently cold and we turned on the heat in the new house. Woke up at 3:15am broiling in my own bed. It turns out the previous owner had programmed the thermostat to go up to 75°F at some point in the night.
75!? I barely let the house get that warm during the summer! So I’m currently in the living room with the sliding door to the back patio cracked so I can cool off. 🥵
Am I weird in disliking inlay hints?
They're those little notes that your IDE can add to your code to show you what types things are, but they're not actually part of your source code. For an example, see TypeScript v4.4's documentation for inlay hints.
My opinion is that inlay hints paper over ambiguous code that would be better fixed at the source. As an example, take this code:
function main() {
    console.log(foo("Hello", "world"))
}

// Imagine this function is in some other file, so it's not on the same screen.
function foo(target: string, greeting: string) {
    return `${greeting}, ${target}!`
}
If you're looking at just the call site, there's a non-obvious bug here, because the foo() function takes two arguments of the same type and the author of main() passed them to foo() in the wrong order.
Inlay hints propose to help with the issue by showing you function parameter names inline at your call site, like this:
function main() {
    console.log(foo(target: "Hello", greeting: "world"))
}
(target: and greeting: are added, and somehow highlighted to indicate that they're not code.)
Now it's more clear that you've got the arguments in the wrong order. But only if you're looking at the code in an IDE that's providing those inlay hints. If you're looking at just the raw source code (say, while doing code review, or spelunking through Git history), you don't see those hints. The developer is relying on extra features to make only their own workflow easier.
Without inlay hints, it's a bit more obvious that, hey, the ordering here can be ambiguous, I should make that more clear. Maybe we should make foo() more user-friendly?
Lots of languages support named parameters for this reason. TypeScript/JavaScript don't have named parameters directly, but often end up approximating them with object passing:
function foo({target, greeting}: FooArgs) {
    return `${greeting}, ${target}!`
}
interface FooArgs {
    target: string
    greeting: string
}
Now the call site is unambiguous without inlay hints:
foo({greeting: "Hello", target: "world"})
And, even better, our arguments can be in whatever order we want. (This syntax is even nicer in languages like Python or Kotlin that have built-in support for named parameters.)
The prime use case for these kinds of hints is when you're forced to use some library that you didn't write and that has a poor API. But IMO you're probably still better off writing your own shim that uses better types and/or named parameters to interact with that library, to save yourself the continued headache of dealing with it. Inlay hints just let you pretend it's not a problem for just long enough to pass the buck to the next developers who have to read/modify the code.
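For example, a shim over some hypothetical badLib.sendGreeting(greeting, target) function (made up for illustration) might look like this:

import * as badLib from "some-awkward-library" // hypothetical library

interface GreetArgs {
    target: string
    greeting: string
}

// Callers of greet() can't mix up the arguments, and no inlay hints are needed
// to see that at the call site.
export function greet({ target, greeting }: GreetArgs): string {
    return badLib.sendGreeting(greeting, target)
}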
I have a desktop gaming machine that runs Windows 11. It's not bad at games, but it's so slow at things like opening apps, opening settings, etc.
Is Windows 11 just this slow, or is something wrong?
It's so bad that I ran winsat formal to see if my NVMe "hard drive" was somehow misconfigured:
Results:
> Run Time 00:00:00.00
> Run Time 00:00:00.00
> CPU LZW Compression 1139.80 MB/s
> CPU AES256 Encryption 15057.26 MB/s
> CPU Vista Compression 2834.34 MB/s
> CPU SHA1 Hash 10656.56 MB/s
> Uniproc CPU LZW Compression 100.19 MB/s
> Uniproc CPU AES256 Encryption 986.78 MB/s
> Uniproc CPU Vista Compression 250.19 MB/s
> Uniproc CPU SHA1 Hash 774.01 MB/s
> Memory Performance 29614.11 MB/s
> Direct3D Batch Performance 42.00 F/s
> Direct3D Alpha Blend Performance 42.00 F/s
> Direct3D ALU Performance 42.00 F/s
> Direct3D Texture Load Performance 42.00 F/s
> Direct3D Batch Performance 42.00 F/s
> Direct3D Alpha Blend Performance 42.00 F/s
> Direct3D ALU Performance 42.00 F/s
> Direct3D Texture Load Performance 42.00 F/s
> Direct3D Geometry Performance 42.00 F/s
> Direct3D Geometry Performance 42.00 F/s
> Direct3D Constant Buffer Performance 42.00 F/s
> Video Memory Throughput 279385.00 MB/s
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Sequential 64.0 Read 4159.90 MB/s 9.5
> Disk Random 16.0 Read 1007.15 MB/s 8.8
> Total Run Time 00:00:11.67
When it can read a gigabyte per second doing random access, I don't think the disk is the problem. The CPU is an "AMD Ryzen 7 3700X 8-Core Processor" at 3.59 GHz, which also shouldn't be a problem.
Anybody have tips beyond "LOL don't run Windows"?