  • Large language models can generate defensive code, but if you’ve never written defensively yourself and you learn to program primarily with AI assistance, your software will probably remain fragile.

    This is the thesis of the argument, and it’s completely unfounded. “AI can’t create antifragile code”? Why not? Effective tests and debug-time checks, at this point, come straight from Claude without me even prompting for them. Even if you are rolling the code yourself, you can use AI to throw a hundred prompts at it, asking “does this make sense? Are there any flaws here? What remains untested or out of scope that I’m not considering?”, like a juiced-up static analyzer. See the sketch below for the kind of thing I mean.
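
    For concreteness, here is a minimal sketch of the defensive checks and tests in question (Python, and the median function is purely hypothetical, not anything from the article):

        def median(values: list[float]) -> float:
            """Return the median of a non-empty list of numbers."""
            # Defensive, debug-time checks: fail loudly on bad input
            # instead of silently returning garbage.
            assert values, "median() requires a non-empty list"
            assert all(isinstance(v, (int, float)) for v in values), \
                "median() requires numeric values"

            ordered = sorted(values)
            mid = len(ordered) // 2
            if len(ordered) % 2:
                return float(ordered[mid])
            return (ordered[mid - 1] + ordered[mid]) / 2

        # The sort of test an assistant will generate unprompted.
        def test_median():
            assert median([3, 1, 2]) == 2
            assert median([1, 2, 3, 4]) == 2.5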


  • The reasoning in question:

    This game teaches - by way of images, information and gameplay - skills and knowledge that are used in poker. During gameplay, the player is rewarded with ‘chips’ for playing certain hands. The player is able to access a list of poker hand names. As the player hovers over these poker hands, the game explains what types of cards the player would need in order to play certain hands. As the game goes on, the player becomes increasingly familiar with which hands would earn more points. Because these are hands that exist in the real world, this knowledge and skill could be transferred to a real-life game of poker.

    So, this game teaches skills and knowledge that are used in poker. The skills and knowledge are limited to… playing and making poker hands. That’s it. Also, “as the game goes on, the player becomes increasingly familiar with which hands would earn more points” is hilarious. The idea that knowing what a poker hand is has anything to do with the dangers of gambling is ridiculous.