People tend to be skeptical that the 17 × 65,536 = 1,114,112 character codes provided by Unicode will be big enough. After all, we have moved from 8-bit to 64-bit computers, both in word size and in address size; in general, most finite limits have been repeatedly shown to be insufficient. The maximum normal memory on MS-DOS-based PCs was 640K, ten times as big as the 64K limit on the 8-bit systems that preceded them: after all, as Bill Gates supposedly said back in 1981, 640K of memory ought to be enough for anybody!
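The code-space arithmetic is easy to check for yourself; here is a quick sketch in Python (an illustration, not part of the original text):

```python
# Unicode's code space: 17 planes of 65,536 code points each.
PLANES = 17
CODE_POINTS_PER_PLANE = 2 ** 16   # 65,536

total = PLANES * CODE_POINTS_PER_PLANE
print(total)             # 1114112 code points in all
print(hex(total - 1))    # the highest code point, U+10FFFF
```

The familiar U+10FFFF ceiling falls directly out of the same multiplication.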
In fact, though, there just aren't any huge and complicated writing systems hiding in some remote ravine. We have a pretty good map of all the writing systems on the planet; a few may have been overlooked by accident, but none of them are going to be huge. The biggest remaining ones are Egyptian hieroglyphics and ancient Chinese characters, and neither of them will require anything like a million character codes.
There are other ceilings in computing that aren't likely to be broken through either. Consider the number of different assembly-language opcodes. Does anyone foresee computer chips with 65,536 different opcodes? How about 4,294,967,296 distinct opcodes? I don't think so.
Or consider IP version 6 network addresses. There are 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 of them, roughly 3.4 × 10^38.
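These ceilings are all powers of two, and putting them side by side makes the scale concrete; a small Python sketch (again an illustration, not from the original text):

```python
# The powers of two behind the limits discussed above.
opcode_16bit = 2 ** 16       # 65,536 possible 16-bit opcodes
opcode_32bit = 2 ** 32       # 4,294,967,296 possible 32-bit opcodes
ipv6_addresses = 2 ** 128    # the IPv6 address space

# Python integers are arbitrary-precision, so 2**128 is exact.
print(f"{opcode_16bit:,}")
print(f"{opcode_32bit:,}")
print(f"{ipv6_addresses:,}")
```

The last line prints 340,282,366,920,938,463,463,374,607,431,768,211,456, a 39-digit number.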