The physician's errors are covered with earth.

When booting via PXE, Linux usually fetches its root file system over NFS or NBD, with a SquashFS or UnionFS (or something similar) layered on top. There is nothing wrong with this, but it can be annoying to keep the files in the TFTP tree in sync with an NFS tree, or to maintain an ISO image. In either case, you are running a second file server. It would be nice if one could also serve the root file system via TFTP.

I see four problems with this approach.

There are no directory listings in TFTP. A conservative extension would be to define a file, say directory/*dirlist*, containing the directory index. With a simple script running over the published directories and creating that file in each of them, one would not even need to adapt the TFTP server, as the protocol could stay the same. Another possibility would be to define an additional RFC 2347 compliant option, say "dirlist=true", in the request packet. However, this would require a protocol extension.
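As a rough sketch of the first variant (the file name *dirlist*, the root path, and the one-entry-per-line format are assumptions of mine, not part of any standard), such an index could be generated like this:

```python
#!/usr/bin/env python3
"""Sketch: write a "dirlist" index file into every published directory."""
import os

TFTP_ROOT = "/srv/tftp"   # assumed TFTP root

for dirpath, dirnames, filenames in os.walk(TFTP_ROOT):
    # subdirectories get a trailing slash so clients can tell them apart
    entries = sorted(d + "/" for d in dirnames) + sorted(filenames)
    # do not list the index file itself when the script is re-run
    entries = [e for e in entries if e != "dirlist"]
    with open(os.path.join(dirpath, "dirlist"), "w") as index:
        index.write("\n".join(entries) + "\n")
```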

There is no meta information for files in TFTP. A conservative extension would be to add it to the directory list mentioned above, which would also keep the format extensible. Alternatively, this could be done with an RFC 2347 compliant option, say "file-meta", in the request packet.
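For illustration, a dirlist file carrying such meta information might look like this; the columns (size in bytes, mtime as a Unix timestamp, mode) are an invented layout, chosen only to show that the format could grow:

```
# name          size      mtime       mode
vmlinuz         7340032   1698765432  0644
initrd.img      25165824  1698765432  0644
modules/        -         1698765400  0755
```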

Files can be at most about 32 megabytes large. This is because RFC 1350 specifies 512 byte blocks enumerated by a 16-bit block number (65535 blocks of 512 bytes each). RFC 2348 defines a blocksize option, but the block size is still bounded by the maximum size of a UDP datagram, so this is no real solution. A conservative extension would be to split large files into smaller parts with a defined naming scheme. By just running a script over the directories which splits all large files, one would not even need a protocol extension, and old TFTP servers could still be used. Another possibility would be an RFC 2347 compliant option defining a wider block counter, but this would be a protocol extension which is not backward compatible.
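A sketch of such a splitting script, assuming a ".partNNN" naming scheme (my own invention, which clients would have to know about) and the classic 512-byte/16-bit limit:

```python
#!/usr/bin/env python3
"""Sketch: split files larger than the classic TFTP limit into numbered parts."""
import os

TFTP_ROOT = "/srv/tftp"      # assumed TFTP root
LIMIT = 512 * 65535          # ~32 MB: 512-byte blocks, block numbers 1..65535

for dirpath, _, filenames in os.walk(TFTP_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.path.getsize(path) <= LIMIT:
            continue
        with open(path, "rb") as src:
            part = 0
            while True:
                chunk = src.read(LIMIT)
                if not chunk:
                    break
                # e.g. disk.img.part000, disk.img.part001, ...
                with open(f"{path}.part{part:03d}", "wb") as dst:
                    dst.write(chunk)
                part += 1
        os.remove(path)      # keep only the parts
```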

Files are always transferred in their entirety. This is a problem especially when publishing block devices. One could split them into smaller files as proposed above, but that is not really convenient, and I cannot think of a conservative extension for this. Another possibility would be an RFC 2347 compliant range option giving the byte range to be read; this would be a protocol extension, but it would also solve the problem of files larger than 32 megabytes.
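To illustrate, this is how a read request carrying such a range option could be encoded: RFC 2347 appends options to the RRQ as alternating NUL-terminated name/value strings. The "range" option itself and its "start-end" value syntax are hypothetical, not defined anywhere.

```python
import struct

def build_rrq(filename, mode="octet", options=None):
    # RFC 1350 RRQ: opcode 1, then filename and mode as NUL-terminated strings.
    # RFC 2347 appends options as NUL-terminated name/value string pairs.
    packet = struct.pack("!H", 1)
    packet += filename.encode("ascii") + b"\x00"
    packet += mode.encode("ascii") + b"\x00"
    for name, value in (options or {}).items():
        packet += name.encode("ascii") + b"\x00"
        packet += str(value).encode("ascii") + b"\x00"
    return packet

# Hypothetical "range" option: read 4096 bytes starting at offset 1 GiB.
rrq = build_rrq("disk.img", options={"range": "1073741824-1073745919"})
```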

Some problems may remain, but I think the setup could become much simpler.