Generally, the data structures used to represent the files stored on a disk are limited. The simplest way to design a fast filesystem is to preallocate fixed, easy-to-index structures for metadata; each entry in these structures points to an inode. Inodes are themselves structures that contain the filesystem's metadata about a file.
The limit follows from how many bytes each entry in that structure takes. For example, if you need 16 bytes to point to an inode on disk and you reserve 256 MB for this pointer structure, you get a global limit of 16 million files in the filesystem. Similarly, 32-bit (4-byte) pointers in a 4 GB structure give a limit of 1 G (roughly a billion) files.
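The arithmetic above is just the reserved table size divided by the entry size. A minimal sketch (the function name and constants are illustrative, not from any real filesystem):

```python
def max_files(table_bytes: int, entry_bytes: int) -> int:
    """Global file limit implied by a fixed-size inode pointer table."""
    return table_bytes // entry_bytes

MiB = 1024 ** 2
GiB = 1024 ** 3

# 16-byte entries in a 256 MiB table -> 16 M files
print(max_files(256 * MiB, 16))   # 16777216
# 4-byte (32-bit) entries in a 4 GiB table -> 1 G files
print(max_files(4 * GiB, 4))      # 1073741824
```

Once that table is full, no more files can be created, no matter how much free data space remains.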
This model scales with system resources, so modern filesystems can support absurdly large limits (see the ZFS limits listed on Wikipedia). Some filesystems, such as ext4, also support more files on larger volumes: the metadata scales with the size of the filesystem, up to the maximum the filesystem's "bitness" supports, at the cost of disk space "burned" on metadata. Ext4 uses 32-bit inode numbers, so its maximum is fixed at about 4 billion files. You can also reserve less disk space for inodes/pointers, which reduces your effective maximum file count.
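On ext4 this trade-off is set at format time: mke2fs allocates roughly one inode per "bytes-per-inode" of volume space (16384 by default in common configurations), and the total can never exceed the 32-bit inode number space. A rough sketch of that sizing rule, assuming the default ratio:

```python
def ext4_inode_count(volume_bytes: int, bytes_per_inode: int = 16384) -> int:
    # mke2fs reserves roughly one inode per `bytes_per_inode` of volume,
    # capped by ext4's 32-bit inode number space.
    return min(volume_bytes // bytes_per_inode, 2 ** 32 - 1)

TiB = 1024 ** 4
print(ext4_inode_count(1 * TiB))  # 67108864 (~67 M inodes on a 1 TiB volume)
```

Raising bytes-per-inode (mke2fs's `-i` option) burns less space on metadata but lowers the maximum file count, which is exactly the trade-off described above.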
There are other ways of implementing filesystems that avoid these limits, but they come with downsides. You might, for example, design a filesystem without a global limit, but one where every block lookup must walk the directory structure; the threshold then effectively moves into each subdirectory instead. I don't know of a general-purpose filesystem that actually works this way, however; as far as general-purpose filesystems in the wild go, ZFS is probably the closest.