  • The biggest items on the graph are all out-of-bounds accesses, use-after-free, and overflows. It is undeniable that memory-safe languages help reduce vulnerabilities: we have known for decades that memory-corruption vulnerabilities are both the most common and the most severe ones in programs written in memory-unsafe languages.

    Unsafe Rust also doesn’t turn off every safety feature, and it’s much better to have clearly highlighted, isolated unsafe sections of code, which can be reviewed and tested more easily, than to have everything suffer from those problems.
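
    To illustrate (a minimal sketch, not taken from any real codebase): the unsafe block stays a few auditable lines hidden behind a safe API, while the borrow checker and every other check stay fully active around it.

    ```rust
    /// Split a slice into its first element and the rest, with the
    /// only unsafe code confined to a small, commented block.
    fn split_first_mut(slice: &mut [u8]) -> Option<(&mut u8, &mut [u8])> {
        if slice.is_empty() {
            return None;
        }
        let ptr = slice.as_mut_ptr();
        let len = slice.len();
        // SAFETY: `ptr` is valid for `len` >= 1 elements, and the two
        // returned references cover disjoint regions, so this is sound.
        unsafe { Some((&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1))) }
    }

    fn main() {
        let mut data = [1u8, 2, 3];
        if let Some((head, rest)) = split_first_mut(&mut data) {
            *head = 9;
            rest[0] = 8;
        }
        assert_eq!(data, [9, 8, 3]);
    }
    ```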

    I don’t think there is a debate here. Rewriting is a huge effort, but it is simply a fact that C code is prone to memory-corruption vulnerabilities and that memory-safe languages are better in that regard.




  • With the SimpleLogin integration, Proton does PGP encryption, because effectively all emails are forwarded through a SimpleLogin address. I have just tested it to be sure, and I can confirm that it is the case. I agree, though, that this only protects “my side”, which is why I said that it doesn’t provide all the PGP features.

    Publishing your PGP public key next to your email doesn’t require “wasting a domain” or anything like that

    It does if I don’t have any key that I use for email. My keys are bound to the Proton account with the other domains I use, so for this domain I would need either to add it (back) to Proton (the easier option, but it “wastes” a domain) or to generate and manage a key myself, which I could then even add manually to Proton; I just didn’t bother doing that yet. I am not going to use any other public key I have, because I specifically wanted to keep this domain separate from my identity.

    I just thought it was amusing that you didn’t seem to actually follow your own advice.

    FWIW, I do follow the described setup for everything personal, which is what matters to me. As I said, ~1–2 months ago I did have a PGP key for this domain, because I had enrolled it into Proton, which if anything is a testament to how annoying it is to manage keys myself (which I already do for signing commits etc.). Maybe I will spend some time polishing the setup eventually.



  • Yep, I am aware of the contradiction. I used to, but I have since moved to an alias, as it was not worth wasting a domain for a single address. I may eventually spend the time to set up PGP for the alias itself, but I just haven’t. It’s a Proton alias, so I get PGP encryption anyway (obviously without all the features, but good enough for the near-zero volume I currently have).


  • Not that I know of, which is why I essentially didn’t consider those threats relevant to my personal threat model. However, it’s also possible that it happened and was never discovered. The point is that there are risks associated with one provider having access to both the emails (and the operations around them) and the keys/crypto operations.

    The cost of stealthily compromising a secure email company is simply disproportionate compared to the gain from accessing my emails. Likewise, it’s unrealistic to think some sophisticated attacker would target me specifically, to the point of discovering and then compromising the specific tooling I use to access/encrypt/decrypt emails. Also, a $5 wrench could probably achieve the same goal in a quicker and cheaper way.

    If I were a Snowden-level person, I would probably consider that, though, as it’s possible that the US government would try to coerce, say, Proton into serving bad JS code to user X. For most people, I argue, these are theoretical attacks that pose no concrete risk.


  • Thanks, I will go and double-check; I am sure there are more typos!

    I honestly didn’t think at all about the use of checkmarks/crosses and the fact that they can be misinterpreted; I will add a disclaimer.

    A bigger issue IMO is how you describe email encryption in transit as a matter of fact, but according to the Google transparency report[1] there are still domains that do not support in-transit encryption, and, what’s worse, when you send an email you can’t tell whether it will be encrypted or not.

    You are right. The reason I took that for granted is that I assumed the scenario in which people use the “mainstream” providers. I was looking at the data, and I think Outlook and Gmail alone make up more than 50% of the market share. I made an assumption that I considered fair, as 99%+ of users do not need to worry about this at all. However, this is interesting data and I might add a note about it as well, so thanks!
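
    As an illustration of why you can’t tell in advance (a rough sketch with a hypothetical MX host; a real check would run against the recipient’s actual MX records, and port 25 is often blocked outside mail infrastructure): you can ask the receiving server whether it even advertises STARTTLS, but nothing forces the next hop to encrypt.

    ```rust
    use std::io::{BufRead, BufReader, Write};
    use std::net::TcpStream;

    /// Probe an SMTP server and report whether its EHLO response
    /// advertises the STARTTLS extension.
    fn advertises_starttls(host: &str) -> std::io::Result<bool> {
        let mut stream = TcpStream::connect((host, 25))?;
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut line = String::new();

        reader.read_line(&mut line)?; // consume the 220 greeting
        stream.write_all(b"EHLO example.org\r\n")?;

        // EHLO replies span multiple lines: "250-..." continues, "250 ..." ends.
        let mut found = false;
        loop {
            line.clear();
            if reader.read_line(&mut line)? == 0 {
                break; // connection closed
            }
            if line.to_ascii_uppercase().contains("STARTTLS") {
                found = true;
            }
            if line.starts_with("250 ") {
                break;
            }
        }
        stream.write_all(b"QUIT\r\n")?;
        Ok(found)
    }

    fn main() -> std::io::Result<()> {
        let host = "mx.example.com"; // hypothetical MX host
        println!("{host} advertises STARTTLS: {}", advertises_starttls(host)?);
        Ok(())
    }
    ```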


  • Thanks!

    Can you make the images clickable? They’re impossible to read at that size.

    I will look into it; there might be a Zola option for it. If there is, sure!

    This paragraph should probably mention that this won’t work if the provider uses E2EE

    That paragraph is in the context of what I call “transparent encryption”: E2EE works as long as the provider is not compromised, because a compromised provider can effectively break the E2EE by delivering malicious software or disclosing the key. E2EE is only as resilient as the security of the provider, which is why picking a trusted one is important. Of course, compromising the provider and breaking the E2EE is quite complex.




  • I think the main benefit of federation is discoverability. I can spin up my own Gitea or Forgejo (or something else!) instance, but when people look for code on their instances, they can still discover my public repositories, and if they want to contribute, they can fork them and open PRs from their instances.

    So yeah, it mostly means you can self-host and provide space to others, but with the same benefit that GitHub offers right now (i.e., everything is there).




  • In most cases! Sorry, I simply don’t believe it. Once you have operated for 5, 10, 20 years, not having capitalized anything is expensive as hell, even setting aside the skill issue (which is not a great argument, as it applies to almost anything).

    It’s almost always the case with rent vs invest.

    Do you have some numbers?

    I cite a couple of articles in the post, and here is a nice list of companies and orgs that run outside the cloud (it’s a bit old!) or decided to move away. Many big companies with their own DCs, which is not surprising, but also smaller ones (Wikipedia!).

    37signals also showed a huge amount of savings (it’s one of the two links in the post) after moving away from the cloud. Do you have any similar data showing the opposite (“we saved X after going cloud”)? I am genuinely curious.

    Edit: here is another one: https://tech.ahrefs.com/how-ahrefs-saved-us-400m-in-3-years-by-not-going-to-the-cloud-8939dd930af8 Looking solely at the compute resources, there was an order of magnitude of difference between cloud costs and hosting costs (11x). Basically a value comparable to (in reality, double) the whole revenue of the company.



  • Redundancy should be automatic. RAID5, for instance.

    Yeah, it should, but something needs to implement that. I mean, when distributed systems work, redundancy is automatic, but they can also fail. We are talking about redundancy implemented via software, and software always has bugs. I am not saying it can’t be achieved, of course it can, but it has a cost.
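
    As a toy example of what “implemented via software” means here (a minimal sketch, not how any real RAID controller is written): RAID5-style parity is just XOR across blocks, and the rebuild path is code that has to be correct too.

    ```rust
    /// XOR all blocks together; the same operation computes parity
    /// and reconstructs a lost block from the survivors.
    fn parity(blocks: &[Vec<u8>]) -> Vec<u8> {
        let mut p = vec![0u8; blocks[0].len()];
        for block in blocks {
            for (pb, b) in p.iter_mut().zip(block) {
                *pb ^= b;
            }
        }
        p
    }

    fn main() {
        let d0 = vec![0xAA, 0x01];
        let d1 = vec![0x55, 0x02];
        let p = parity(&[d0.clone(), d1.clone()]);
        // "Lose" d1: XOR the surviving block with parity to rebuild it.
        let rebuilt = parity(&[d0, p]);
        assert_eq!(rebuilt, d1);
        println!("rebuilt block: {rebuilt:02X?}");
    }
    ```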

    You can have an Oracle (or Postgres, or Mongo) DB with multi-region redundancy, encryption and backups with a click.

    I know, and if you don’t understand all that complexity, you can still fuck up your Postgres DB in a disastrous way. That’s the whole point of this thread. Also, operators can do the same for you nowadays, but again, you need to know your systems.

    Much, much simpler for a sysadmin (or an architect) than setting up the simplest MySQL on a VM.

    Of course it is. You are paying someone else to do that job; I’m not going to argue with that. In fact, that’s what makes it boring (which I talked about in the post).

    Unless you’re in the business of configuring databases, your developers should focus on writing insurance risk code, or telco optimization, or whatever brings money.

    This is a modern dogma that I simply disagree with. Building an infrastructure tailored to your needs (i.e., with everything you need and nothing else) and cost-effective does bring money: it saves costs and avoids spending an enormous amount of resources on renting all of that, forever, scaling with your business.
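
    To show the shape of the rent-vs-invest argument (a toy model with entirely made-up numbers, not data from any real company): capex is paid once and then only upkeep recurs, while rent recurs in full, forever.

    ```rust
    fn main() {
        let capex = 200_000.0;      // hypothetical: buy and rack the hardware
        let opex_owned = 50_000.0;  // hypothetical yearly power/space/people
        let cloud_rent = 150_000.0; // hypothetical yearly cloud bill

        // Cumulative cost over ten years; owned overtakes rented at year 2
        // with these invented numbers and keeps widening after that.
        for year in 1..=10 {
            let owned = capex + opex_owned * year as f64;
            let rented = cloud_rent * year as f64;
            println!("year {year:2}: owned {owned:>9.0} vs rented {rented:>9.0}");
        }
    }
    ```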

    You can build a redundant system in a day like Legos, much better security and higher availability (hell, higher SLAs even) than anything a team of 5 can build in a week self-managing everything.

    This is the marketing pitch. The reality is that companies still have huge teams, still have tons of incidents, still take long to deliver projects, still have security breaches, but they are spending 3, 5, 10 times as much, and none of that money is capitalized.

    I guess we fundamentally disagree. I envy you for the positive experiences you must have had!


  • Instant transactions are periodic, I don’t know any bank that runs them globally on one machine to compensate for time zones.

    Of course they don’t run them on one machine. I know that UK banks have DCs only in the UK. Also, the daily pattern is almost identical every day. You spec to handle the peaks, and you are good. Even if your systems are at 20% load half of every day, you are still saving tons of money.
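
    A back-of-the-envelope sketch of that point, with invented prices (the real gap depends entirely on the workload and the contract):

    ```rust
    fn main() {
        let peak_units = 100.0;
        let owned_cost_per_unit_hour = 0.05; // hypothetical amortized cost
        let cloud_cost_per_unit_hour = 0.25; // hypothetical on-demand price

        // Half the day at peak, half at 20% utilization.
        let used_unit_hours = peak_units * 12.0 + peak_units * 0.2 * 12.0;

        // Owned capacity pays for idle hours too; cloud pays only for use,
        // but at a much higher unit price in this made-up example.
        let owned_daily = peak_units * 24.0 * owned_cost_per_unit_hour;
        let cloud_daily = used_unit_hours * cloud_cost_per_unit_hour;
        println!("owned: ${owned_daily:.0}/day, cloud: ${cloud_daily:.0}/day");
    }
    ```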

    Batches happen at a fixed time, are idle most of the day.

    Between banks, yes; from customer to bank, they are not. Also, most circuits are now moving toward instant payments, so payments are settled more frequently between banks.

    My experience is with banks (including UK ones) that are modernizing, and cloud for most apps brings brutal savings if done right, or moderate savings if getting better HA/RTO.

    I want to see that happening. I work for one, and I see how our company is literally bleeding money on cloud costs.

    But that should have been a lambda function that would cost 5 bucks a day tops

    It’s one of the most expensive products, at least for high loads. Plus you need to sign things with HSMs etc., and you may want a secure environment. So I would say… it depends.

    Obviously I agree with you that you need to design rationally and not just do a dumb one-to-one translation of the architecture, but you are paying for someone else to do the work + the service. The cloud helps you delegate some responsibilities, but it can’t be cheaper, especially in the long run, since you are not capitalizing anything.



  • Systems are always overspecced, obviously. Many companies in those industries are dinosaurs running very outdated systems (banks, for example), after all, and they all existed before the cloud was a thing.

    I can’t speak for other industries either, but I work in fintech, and banks have a very predictable load, to the point that their numbers are almost fixed (and I am talking about big UK banks, not small ones).

    I imagine retail and automotive are similar: they have so much data that their average-load estimates are almost 100% accurate, which allows for good capacity planning, and their audience is so wide that global spikes are very unlikely.

    Industries with variable load are those that run CPU-intensive (or memory-intensive) tasks for a very variable set of customers: media (streaming), AI (training), etc.

    I also worked in the gaming industry, and while there are huge peaks, the jobs are not so resource-intensive as to need anything more than good capacity planning.

    I assume, however, that everybody has their own experiences, so I am not aiming to convince you or anything.


  • I am specifically saying that redundancy doesn’t solve everything magically. Redundancy means coordination, and more things that can also fail. A redundant system needs more care, more maintenance, more skills, more cost. If a company adopts something more sophisticated without the corresponding effort, it is making things worse. If a company with a 10-person department thinks that by using the cloud it can have a system as resilient as what 40 people could build, it is wrong, because it now has a system far more complex than it can handle, despite the fact that storage is replicated easily by clicking in the GUI.