We all migrate to smaller websites and try not to post anywhere that draws attention, just to hide from the “AI” crawlers. The internet seems dead except for the few pockets we each know exist, away from the clankers.
Do you know how they find it? Is it just random input of address over and over?
Almost certainly. There are only 4,294,967,296 possible IPv4 addresses, i.e. 4.3ish billion, which sounds like a lot but in computer terms really isn’t. You can scan them in parallel, and if you’re an advanced script kiddie you could even exclude ranges that you know belong to unexciting organizations like Google and Microsoft, which are probably not worth spending your time messing with.
If you had a botnet of 8,000 or so devices and used a probably unrealistically generous timeout of 15 seconds, i.e. four attempts per minute per device, you could scan the entire IPv4 range in just a hair over 93 days, and that’s before excluding any known-pointless address blocks. If you spent only a second on each probe, you could do it in about six days.
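The arithmetic above is easy to check yourself; here’s the back-of-the-envelope version (the 8,000-bot fleet size is just the hypothetical from the post):

```python
# Back-of-the-envelope scan-time math for the full IPv4 range.
TOTAL_IPV4 = 2 ** 32   # 4,294,967,296 possible addresses
BOTS = 8_000           # hypothetical botnet size from the post above

def scan_days(seconds_per_probe: float) -> float:
    """Days to probe every IPv4 address, one probe at a time per bot."""
    probes_per_second = BOTS / seconds_per_probe
    return TOTAL_IPV4 / probes_per_second / 86_400  # 86,400 s per day

print(round(scan_days(15), 1))  # 15 s timeout -> ~93.2 days
print(round(scan_days(1), 1))   # 1 s per probe -> ~6.2 days
```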
And for perspective: cybercriminals are already operating botnets with upwards of 100,000 compromised machines doing their bidding. That bidding could well be (and probably is) probing random web servers for vulnerabilities. The largest confirmed botnet, 911 S5, contained about 19 million devices.
That’s amazing and scary at the same time. Thanks for putting it into perspective!
If it’s https it’s discoverable by hostname.
https://0xffsec.com/handbook/information-gathering/subdomain-enumeration/#certificate-transparency
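To make the CT-log point concrete: every publicly trusted HTTPS certificate is recorded in Certificate Transparency logs, and search frontends like crt.sh expose those records as JSON, so a crawler can harvest hostnames without ever guessing. A hedged sketch of the parsing side (the `name_value` field matches crt.sh’s current output, but treat the exact shape as an assumption):

```python
# Sketch: extracting DNS names from a crt.sh-style CT log search result.
# Each JSON entry's "name_value" field can hold several newline-separated
# names. Field name and response shape are assumptions based on crt.sh.
import json

def hostnames_from_ct(ct_json: str) -> set[str]:
    """Return the unique DNS names found in a crt.sh-style JSON response."""
    names = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names

# Abridged example of the response shape (not real data):
sample = '[{"name_value": "blog.example.com\\nshop.example.com"}]'
print(sorted(hostnames_from_ct(sample)))
```

So once a site gets a certificate, its hostname is effectively published, whether or not anyone links to it.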
I don’t know exactly how they do it, but probing every IPv4 address isn’t that hard.
But there can be multiple websites behind one IP address?! They would not show when only accessing the IP. They would need to know about the domains somehow.
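Right, that’s name-based virtual hosting: many sites share one IP, and the server picks which site to serve from the HTTP `Host` header (for HTTPS, the TLS SNI extension carries the same information). That’s exactly why a scanner hitting the bare IP sees nothing useful, but one armed with hostnames from CT logs does. A toy sketch of the dispatch logic (the site names are made up for illustration):

```python
# Toy sketch of name-based virtual hosting: one IP/port, many sites.
# The server routes each request by hostname; hitting the bare IP
# matches nothing. Site names here are hypothetical.
SITES = {
    "blog.example.com": "Blog front page",
    "shop.example.com": "Shop front page",
}

def dispatch(host_header: str) -> str:
    """Return the content for the requested hostname, if hosted here."""
    host = host_header.split(":")[0].strip().lower()  # drop any :port suffix
    return SITES.get(host, "404: unknown site on this IP")

print(dispatch("blog.example.com"))  # the blog, as requested by name
print(dispatch("203.0.113.7"))       # bare-IP probe gets the fallback
```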