

Bit of a left field suggestion but one thing that really helps is finding your people.
In my younger years I sometimes really struggled with casual conversation; I often felt like I was the weird guy who had nothing to say.
It turned out that was only really true when I was spending a lot of time with people with whom I had very little in common. As I got older I eventually found “my people”. Friends who I click with, who I share values and interests with, who communicate similarly to me.
It’s not about finding people who are just copies of you, that would be pretty boring and make for a real social echo chamber. You want a range of friends with different interests, from different walks of life. But you want them to be, for lack of a better term, “compatible” with you.
If you happen to be neurodivergent then that adds a whooooole extra layer of complexity to conversational compatibility. There’s a stereotype that autistic people are awkward or socially inept, which is complete rubbish. They just communicate differently to neurotypicals. Put a bunch of similar autistic people in a room together and watch them have no trouble at all making conversation with each other, in their own style.
Anyway, maybe this isn’t relevant to you, and you’re already happy with the people in your life. But it’s worth taking the time to examine whether the reason you struggle to make conversation is because you’re trying to make it with the wrong people.
There’s more to it than that. Firstly, at a theoretical level you’re dealing with the concepts of entropy and information density. A given file has a certain amount of information in it. Compressing it is sort of like distilling the file down to its purest form. Once you’ve reached that point, there’s nothing left to “boil away” without losing information.
Secondly, from a more practical point of view, compression algorithms are designed to work nicely with “normal” real world data. For example as a programmer you might notice that your data often contains repeated digits. So say you have this data: “11188885555555”. That’s easy to compress by describing the runs. There are three 1s, four 8s, and seven 5s. So we can compress it to this: “314875”. This is called “Run Length Encoding” and it just compressed our data by more than half!
But look what happens if we try to apply the same compression to our already compressed data. There are no repeated digits, there’s just one 3, then one 1, and so on: “131114181715”. It doubled the size of our data, almost back to the original size.
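To see both effects concretely, here’s a minimal sketch of that toy run-length encoder in Python (the function name is just for illustration, and this naive “count then digit” format assumes runs shorter than ten):

```python
def rle_encode(data: str) -> str:
    """Toy run-length encoding: emit "<count><digit>" for each run.
    Assumes every run is shorter than 10, so the count fits in one digit."""
    out = []
    i = 0
    while i < len(data):
        j = i
        # Advance j to the end of the current run of identical characters.
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append(f"{j - i}{data[i]}")
        i = j
    return "".join(out)

once = rle_encode("11188885555555")  # "314875" (14 chars down to 6)
twice = rle_encode(once)             # "131114181715" (6 chars up to 12)
```

Running the encoder a second time doubles the data, exactly as described: every run in the already-compressed string has length one, so each digit costs two digits to describe.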
This is a contrived example but it illustrates the point. If you apply an algorithm to data that it wasn’t designed for, it will perform badly.