

If you already know some programming languages, look for a GUI or game library for one of them and see whether it fits. If not, something like Blender might be easiest to build in C++, Rust, C (if you’re a masochist), or maybe Zig. The language may also influence the shading language you choose, so start here.
You will need to know a shader language. You have a few options there, but the most popular are GLSL and HLSL (though I’d prefer GLSL). There’s also WGSL and some others, but they aren’t as popular. Prefer whatever the graphics library you’re using wants you to use.
The math is heavy on linear algebra. Look up PBR if you want to render realistic 3D shapes. Google’s Filament is well documented and walks through implementing it yourself if you want, but it’s pretty advanced, so you might want to start simpler (fragment colors can just be base color * light color * light attenuation * (N · L), for example).
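As a toy illustration of that last formula, here’s the diffuse term in plain Python (a real renderer would do this in a fragment shader; the names are mine):

```python
# Simple Lambertian diffuse: base * light * attenuation * max(N.L, 0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(base_color, light_color, attenuation, normal, light_dir):
    # N and L are assumed normalized; the clamp keeps surfaces facing
    # away from the light from going negative.
    n_dot_l = max(dot(normal, light_dir), 0.0)
    return tuple(
        b * l * attenuation * n_dot_l
        for b, l in zip(base_color, light_color)
    )

# A white light hitting a red surface at 60 degrees (N.L = 0.5):
print(diffuse((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 1.0,
              (0.0, 1.0, 0.0), (0.0, 0.5, 0.8660254)))
# -> (0.5, 0.0, 0.0)
```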


It’s one of the major supermarket chains in NL, which I guess isn’t obvious to most people. I miss shopping there: the chains where I am have rotten, moldy produce, while AH always had fresh produce and packs of relatively cheap stroopwafels.
Also, related to the post, I’d almost rather be sweeping the floor there. I don’t want to sweep floors, but it’d mean I live there, so yeah.


The author seems more interested in generating outrage than anything, but I think the point about AI still stands. From a UX standpoint, key points that may be incorrect are a terrible idea. And it’s problematic that they apparently intended to force AI on the user at first.
The author’s privacy and accessibility concerns seem artificial to me.


The feature was introduced as a way for users to get relevant information faster, by providing them with an image, the webpage title, and AI-generated key points.
The AI part was made optional. That doesn’t mean they didn’t try.


Zen figured out link previews without using AI and the solution is really as simple as it gets. Maybe stop trying to manufacture problems for AI to solve?


500°C would be way above the safe operating temps, but most likely yes.


Server memory is probably reusable, though it’s likely to be soldered and/or ECC modules. Someone sufficiently skilled with a soldering iron can probably make it work (if it isn’t directly usable).


My experience, having actually tried this on a huge codebase: my time was better spent looking at file names and reading source code myself to answer specific questions about the code.
Using it to read a single file or a few of them might go better. If you can find the right files first, you might get decent output.


Spouting bullshit? If so, I agree.
Codebases in the 100k to 1M+ SLOC range can be very difficult for an LLM (or a human) to answer detailed questions about. An LLM might be able to point you in the right direction, but they don’t have enough context size to fit the code, let alone the capability to actually analyze it. Summarize? Sure, but it can only summarize what it has in context.


TLDR: data is something you collect over time from users, so you shouldn’t let the contracts for it mindlessly drift, or you might render old data unusable. Keeping those contracts in one place helps keep them organized.
But that explanation sucks if you’re actually five, so I asked ChatGPT to do that explanation for you since that would be hard for me to do:
Here’s a super-simple, “explain it like I’m 5” version of what that idea is trying to say:
🧠 Imagine your toys
You have a bunch of toys in your room — cars, blocks, stuffed animals.
Now imagine this:
You put some cars in the toybox.
You leave other cars on the floor in another place.
You keep some blocks in a bucket… and some blocks on the shelf.
And every time you want a toy, you have to run to a different spot to find its matching pieces.
That would be really confusing and hard to play with, right? Because things are spread out in too many places for no good reason.
🚧 What the blog is really warning about
In software (computer programs), “state” is like where toys are stored — it’s important information the program keeps track of. For example, it could be “what level I’m on in a game” or “what’s in my cart when I shop online.”
The article says the biggest mistake in software architecture is:
Moving that important stuff around too much or putting it in too many places when you don’t need to.
That makes the program really hard to understand and work with, just like your toys would be if they were scattered all over the place. (programming.dev)
🎯 Why that matters
If the important stuff is all over the place:
People get confused.
It’s harder to fix mistakes.
The program gets slower and more complicated for no reason.
So the lesson is:
👉 Keep the important information in simple, predictable places, and don’t spread it around unless you really need to. (programming.dev)
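To put that back in code terms: “keep state in predictable places” can be as small as routing all reads and writes through one owner instead of letting every part of the program keep its own copy. A minimal sketch in Python (the cart example here is hypothetical):

```python
# One class owns the cart's state; everything else asks it.

class Cart:
    """Single source of truth for what's in the cart."""

    def __init__(self) -> None:
        self._items: dict[str, int] = {}

    def add(self, sku: str, qty: int = 1) -> None:
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())

# The checkout page and the header badge both read the same object,
# so "what's in my cart" can't drift between two scattered copies.
cart = Cart()
cart.add("blocks")
cart.add("toy-car", 2)
print(cart.total_items())  # 3
```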


We’re postponing the announced billing change for self-hosted GitHub Actions to take time to re-evaluate our approach.


open to any feedback, always willing to learn
A common pattern with executable Python scripts is to:
- Add a shebang (#!/usr/bin/env python3) to make it easier to execute
- Check __name__ == "__main__" before running any of the script so the functions can be imported into another script without running all the code at the bottom
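A minimal sketch of that layout (the function names are just for illustration):

```python
#!/usr/bin/env python3
"""An executable script whose functions are also importable."""


def greet(name: str) -> str:
    # Reusable logic lives in functions so other modules can import it.
    return f"Hello, {name}!"


def main() -> None:
    # Entry point when the file is executed directly.
    print(greet("world"))


if __name__ == "__main__":
    # Skipped when this file is imported as a module.
    main()
```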

Any website using CSR only can’t have an RCE, because the code runs on the client. Any site capable of RSC, where code runs on both server and client, may be vulnerable.
From what I’ve seen, the exploit is a special request from a client that functionally lets you exec anything you want (via Function’s constructor). If your server is unpatched and recognizes the request, it may be (likely is) vulnerable.
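As a language-agnostic illustration only (this is not the actual React code path), the general shape of the problem is untrusted request data reaching an eval-style constructor; Python’s eval stands in for JavaScript’s Function constructor here:

```python
# Illustration: why reconstructing "functions" from attacker-controlled
# request data amounts to remote code execution.

def unsafe_handler(request_field: str) -> str:
    # Imagine a server that evaluates a field from the request instead
    # of treating it as inert data.
    return str(eval(request_field))  # attacker controls this string

print(unsafe_handler("1 + 1"))                      # benign: 2
print(unsafe_handler("__import__('os').getcwd()"))  # arbitrary code runs
```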
I’m sure we’ll get more details over time and tools to manually check if a site is compromised.


I’m not the one recommending it lol.
If I had to guess, it’s to improve page performance by prerendering as much as possible. I find it overkill, though, and prefer to prerender as much of the page as I can at build time and do CSR for the rest. That doesn’t work if you have dynamic routes or some kind of server-side logic, but it’s good for blogs and such.


I think their point was that CSR-only sites would be unaffected, which should be true. Exploiting it on a static site, for example, couldn’t be RCE because the untrusted code is only being executed on the client side (and therefore is not remote).
Now, most people use, or are at least recommended to use, SSR/RSC these days, and many frameworks enable SSR by default. But using raw React with no Next.js, react-router, etc. to create a client-side-only site likely does protect you from this vulnerability.


30 days assumes you write code on all 30 of them. In practice, it’s closer to 20, so 75 tests per day. That’s doable on some days for sure (if we include parameterized tests), but I don’t write code every day either.
Still, I agree with them that you generally want to write a lot of tests, but volume is less important than quality and thoroughness. Using volume alone as a meaningful metric, as the author does, is nonsense.


This is more likely the actual incident report:
A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
Edit: If you like reading


1500 tests is a lot. That doesn’t mean anything if the tests aren’t testing the right thing.
My experience was that it generates tests for the sake of generating them. Some are good. Many are useless. Without a good understanding of what it’s generating, you have no way of knowing which are good and which are useless.
It ended up being faster for me to just learn the testing libraries and write my own tests. That way I was sure every test served a purpose and tested the right thing.


There is some overlap (NSFW-ish), but not in the way they intended.