big_toast 6 hours ago [-]
URL data sites are always very cool to me. The offline service worker part is great.
The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.
I'm a little confused about the privacy mention. Maybe the fragment data isn't passed to the server, but that's not a particularly strong guarantee. The JavaScript still has access, so privacy is just a promise as far as I can tell.
Am I misunderstanding something? Is there a stronger mechanism in browsers preserving the fragment data's isolation, or some way to prove a URL is running a GitHub repo without modification?
You are right re privacy. It is possible to go from url hash -> parse -> server (to be clear, that's not what SDocs does).
I've been thinking about how to prove our privacy mechanism. The idea I have at the moment is to have 2+ established coding agents review the code after every merge to the codebase and provide a signal (maybe visible in the footer) that, according to them, it is secure and that the check was made after the latest merge. Maybe overkill?! Or maybe a new way to "prove" things?? If you have other ideas please let me know.
big_toast 2 hours ago [-]
No, I don't have any good ideas. Just hoping someone else does, or that I'm missing something.
I think it's in the hands of browser vendors.
The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.
FailMore 2 hours ago [-]
Thanks. The question has made me wonder about the value of some sort of real-time verification service.
pdyc 11 hours ago [-]
i also used the fragment technique for sharing html snippets but urls became very long; i had to implement an optional url shortener after users complained. Unfortunately that meant server interaction.
Re URL length: yes, I have a feeling it could become an issue. I was wondering if a browser extension might give users the ability to have shorter URLs without losing privacy, but I haven't looked into it deeply and don't know if it would be possible (browser extensions are decent bridges between the local machine and the browser, so maybe some sort of decryption key could be used to allow for more compressed URLs...)
pdyc 10 hours ago [-]
i doubt it would be possible. it boils down to a compression problem: squeezing x amount of content into y bits. since the content is unpredictable, it cannot be done without an intermediary to store it.
mystickphoenix 9 hours ago [-]
For this use-case, maybe compression and then encoding would get more data into the URL before you hit a limit (or before users complain)?
I.e. .md -> gzip -> base64
moaning 15 hours ago [-]
Markdown-style editing looks very easy and convenient.
FailMore 15 hours ago [-]
Thanks! One potential use case I have for it is being able to make "branded" markdown if you need to share something client-facing or public-facing.
stealthy_ 14 hours ago [-]
Nice, I've also built something like this we use internally. Will it reduce token consumption as well?
FailMore 14 hours ago [-]
Thanks. Re token reduction: not that I'm aware of. Would you mind explaining how it might? That could be a cool feature to add.
moeadham 1 day ago [-]
I had not heard of URL fragments before. Is there a size cap?
FailMore 1 day ago [-]
Ish. The cap is really the URL length the browser can handle: for desktop Chrome it's 2MB, but for mobile Safari it's 80KB.
The compression algo SDocs uses reduces the size of your markdown file by ~10x, so 80KB still holds ~800KB of markdown, which is fairly beefy.
vivid242 12 hours ago [-]
Hadn't heard of it either. Very smart; it could open up lots of other privacy-friendly "client-side web" apps.
FailMore 12 hours ago [-]
TYVM. Yeah, I am curious to explore moving into other file formats like CSVs.
[1]: https://sdocs.dev/analytics