You can use S3 versioning, assuming you've enabled it on the bucket. It's a little clunky, though, and writes land in batches rather than as a continuous append.
Basically, if your data is append-only (such as a log), buffer whatever amount is reasonable, then PUT the buffered data as a new version of the object (recording the version ID AWS returns). Each PUT gets added to the "stack" of versions of that S3 object. To read it all back, you fetch each version from oldest to newest and concatenate them on the application side.
Tracking versions would need to be done on the application side overall.
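A minimal sketch of that with boto3, assuming versioning is already on; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-log-bucket"  # hypothetical bucket
KEY = "app.log"           # the single "log" object

def append_chunk(data: bytes) -> str:
    """PUT the buffered chunk as a new version; return the version ID."""
    resp = s3.put_object(Bucket=BUCKET, Key=KEY, Body=data)
    return resp["VersionId"]  # record this application-side

def read_all() -> bytes:
    """Concatenate every version, oldest first, to rebuild the full log."""
    paginator = s3.get_paginator("list_object_versions")
    versions = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=KEY):
        versions += [v for v in page.get("Versions", []) if v["Key"] == KEY]
    # Sorting by LastModified is good enough for a sketch; real code should
    # rely on the version IDs it recorded at write time instead.
    versions.sort(key=lambda v: v["LastModified"])
    chunks = []
    for v in versions:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY, VersionId=v["VersionId"])
        chunks.append(obj["Body"].read())
    return b"".join(chunks)
```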
You could also do "random" byte ranges if you track the versioning and embed the target range somewhere in each version's body. You'd still need to read every version to figure out what's most up to date, since some byte ranges will overwrite others.
Definitely not the most efficient, but it is doable.
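One hypothetical encoding for that: prefix each version's body with an 8-byte offset header, then rebuild by replaying versions oldest-first so later ranges win. Again just a sketch, same placeholder bucket/key idea as above:

```python
import struct
import boto3

s3 = boto3.client("s3")
BUCKET = "my-log-bucket"  # hypothetical bucket
KEY = "sparse.bin"

def list_versions_oldest_first():
    paginator = s3.get_paginator("list_object_versions")
    versions = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=KEY):
        versions += [v for v in page.get("Versions", []) if v["Key"] == KEY]
    return sorted(versions, key=lambda v: v["LastModified"])

def write_range(offset: int, payload: bytes) -> str:
    """Store an 8-byte big-endian offset header followed by the payload."""
    body = struct.pack(">Q", offset) + payload
    return s3.put_object(Bucket=BUCKET, Key=KEY, Body=body)["VersionId"]

def materialize() -> bytes:
    """Replay all versions oldest-first; later ranges overwrite earlier ones."""
    buf = bytearray()
    for v in list_versions_oldest_first():
        body = s3.get_object(Bucket=BUCKET, Key=KEY,
                             VersionId=v["VersionId"])["Body"].read()
        (offset,) = struct.unpack(">Q", body[:8])
        payload = body[8:]
        end = offset + len(payload)
        if end > len(buf):
            buf.extend(bytes(end - len(buf)))  # zero-fill any gap
        buf[offset:end] = payload
    return bytes(buf)
```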
You can also set up lifecycle policies, e.g. auto-delete or auto-archive versions older than X days; with versioning that's a single lifecycle rule. With a custom naming scheme instead of versioning, cleanup wouldn't scale as well.
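For example, a rule expiring noncurrent versions after 90 days might look like this (the rule ID and prefix are made up; archiving would use NoncurrentVersionTransitions instead):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # same hypothetical bucket as above
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-log-versions",
                "Filter": {"Prefix": "app.log"},
                "Status": "Enabled",
                # Delete versions 90 days after they stop being current
                "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
            }
        ]
    },
)
```

Just keep in mind that with the append-as-versions scheme, expiring old versions deletes the older chunks of your data, so archival transitions are probably the safer fit here.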