The way I understand this, the issue is that without reading it they cannot verify that it doesn’t contain sensitive information, so they can’t give it out. That sounds like a reasonable explanation to me.
The issue with online voting, no matter what you do, is that someone can force you under threat of violence to vote for a specific candidate, and watch to make sure you do it. Complete privacy in the voting booth is paramount to ensuring that everyone can vote freely.
Software is a tool. I develop stuff that I know is of interest to companies working with everything from nuclear energy to hydrogen electrolysis and CO2 storage. I honestly believe I can make a positive contribution to the world by releasing that software under a permissive licence, such that companies can freely integrate it into their proprietary production code.
I’m also very aware that the exact same software is of interest to the petroleum industry and weapons manufacturers, and that I enable them by releasing it under a permissive licence.
The way I see it, withholding a tool that can do a lot of good because it can also be used for bad things just doesn’t make much sense. If everybody thought that way, how could we make any progress? I can’t think of any fundamental technology that can’t be used for both. The same chemical process that has saved millions from starvation by making synthetic fertiliser possible has taken millions of lives by enabling more and better explosives. Those who were bombed would probably say they wish it had never been invented, while those saved from the brink of starvation likely praise the heavens for the technology. Today, that same chemical process is a promising candidate for zero-emission shipping.
I guess my point is this: For any sufficiently fundamental technology, it is impossible to foresee the uses it may have in the future. Withholding it because it may cause bad stuff just holds technological development back, likely preventing just as much good as bad. I choose to focus on the positive impact my work can have.
There’s evidence that knights would dismount before battle to prevent their horses from being injured, even though they knew they were exposing themselves to greater risk. Although we have more technical knowledge about how to “optimally” care for horses now, there’s no reason to believe we are any less exploitative of them, or that they are any healthier than horses were back then.
Oh, I definitely get that the major appeal of excel is a close to non-existent barrier to entry. I mean, an elementary school kid can learn the basics(1) of using excel within a day. And yes, there are definitely programs out there that have excel as their only interface :/ I was really referring to the case where you have the option to do something “from scratch”, i.e. not relying on previously developed programs in the excel sheet.
(1) I’m aware that you can do complex stuff in excel, the point is that the barrier to entry is ridiculously low, which is a compliment.
I just cannot imagine any task you can do in excel that isn’t easier to do with Python/Pandas. The simplest manipulations of an excel sheet pretty much require you to chain an ungodly list of arcane commands that are completely unreadable, and god forbid you need to work with data from several workbooks at the same time…
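To make the comparison concrete, here is a minimal pandas sketch of the multi-workbook case. The frames are built inline so the example is self-contained; in practice each one would come from something like `pd.read_excel("q1.xlsx")`, and the column names `region`/`revenue` are made up for illustration.

```python
import pandas as pd

# In practice these would come from pd.read_excel(...) on separate
# workbooks; built inline here so the sketch is self-contained.
q1 = pd.DataFrame({"region": ["north", "south"], "revenue": [100, 200]})
q2 = pd.DataFrame({"region": ["north", "south"], "revenue": [150, 250]})

# Stack the two "workbooks" and total revenue per region -- two lines,
# no cross-workbook cell references needed.
combined = pd.concat([q1, q2], ignore_index=True)
totals = combined.groupby("region")["revenue"].sum()
print(totals["north"], totals["south"])  # 250 450
```

The same operation across workbooks in Excel itself typically means external references or a pivot over manually consolidated ranges, which is exactly the kind of chained, hard-to-audit manipulation described above.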
You are neglecting the cost-benefit of temporarily jumping to the wrong conclusion while waiting for more conclusive evidence, though. Doing nothing because the evidence of harm is too thin, and being wrong, can have severe long-term consequences. Restricting TikTok and later finding out that it has no detrimental effects has essentially zero negative consequences. We have a word for this principle in my native language: if you are in doubt about whether something can have severe negative consequences, you are cautious about it until you can conclude with relative certainty that it is safe. What you are suggesting is the other way around: treating something as safe until you have conclusive evidence that it is not, at which point a lot of damage may already be done.
So what you’re saying is: We have a small sample of unreliable evidence that this thing may be absolutely detrimental to the developing brain. Thus, we should assume it’s fine until we have more reliable evidence. Did I get that right?
I am very fond of the idea of “stateless” code, which may seem strange coming from a person who likes OOP. When I say “stateless”, I really mean that no class method should ever have a side effect: either it is an explicit set method, or it shouldn’t affect the output of other methods on the object. Objects should be convenient ways of storing/manipulating data in predictable/readable ways.
I’ve seen way too much code where a class has methods that will only work “as expected” if certain other methods have been called first.
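The idea above can be sketched in a few lines. `Polynomial` is a made-up example: its methods depend only on data fixed at construction, so no call order can change what any method returns.

```python
# A "stateless" class in the sense described above: no method mutates
# hidden state, so methods can be called in any order.
class Polynomial:
    def __init__(self, coeffs):
        self._coeffs = tuple(coeffs)  # fixed at construction

    def evaluate(self, x):
        # Depends only on the stored coefficients and x -- no side effects.
        return sum(c * x**i for i, c in enumerate(self._coeffs))

    def with_coeffs(self, coeffs):
        # Explicit "setter" that returns a new object instead of
        # silently mutating this one.
        return Polynomial(coeffs)

p = Polynomial([1, 0, 2])   # represents 1 + 2x^2
print(p.evaluate(3))        # 19
```

The anti-pattern being criticised would be an `evaluate` that only works after some `prepare()` has been called and has stashed intermediate results on `self`.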
Sounds reasonable to me: With what I’ve written I don’t think I’ve ever been in a situation like the one you describe, with an algorithm split over several classes. I feel like a major point of OOP is that I can package the data and the methods that operate on it, in a single encapsulated package.
Whenever I’ve written in C, I’ve just ended up passing a bunch of structs and function pointers around, basically ending up doing “C with classes” all over again…
I would argue that there are very definitely cases where operator overloading can make code more clear: Specifically when you are working with some custom data type for which different mathematical operations are well defined.
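As an illustration of that case, here is a hypothetical 2D vector type where `+` and `*` have well-defined mathematical meanings, so overloading them makes call sites read like the maths they represent.

```python
# Operator overloading on a custom mathematical type: Vec2 is a
# hypothetical 2D vector where + (addition) and * (scalar
# multiplication) are well defined.
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):
        return Vec2(self.x * scalar, self.y * scalar)

v = Vec2(1, 2) + Vec2(3, 4)   # reads like vector algebra
w = v * 2
print(w.x, w.y)  # 8 12
```

Compare with `v.add(other).scale(2)`: the overloaded form mirrors the notation anyone working with the type already knows.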
This makes sense to me, thanks! I primarily use Python, C++ and some Fortran, so my typical programs / libraries aren’t really “pure” OOP in that sense.
What I write is mostly various mathematical models, so as a rule of thumb, I’ll write a class to represent some model, which holds the model parameters and methods to operate on them. If I write generic functions (root solver, integration algorithm, etc.) those won’t be classes, because why would they be?
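A minimal sketch of that split, with made-up names: `ExpDecay` is a class because it packages parameters with the methods that use them, while `bisect` stays a plain function because it is generic numerics with no state of its own.

```python
import math

class ExpDecay:
    """A model y(t) = y0 * exp(-k t), holding its own parameters."""
    def __init__(self, y0, k):
        self.y0, self.k = y0, k

    def value(self, t):
        return self.y0 * math.exp(-self.k * t)

def bisect(f, a, b, tol=1e-12):
    """Generic root solver -- no reason for this to be a class."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

model = ExpDecay(y0=2.0, k=1.0)
# Time at which the model decays to 1.0 is ln(2) ~ 0.693
t_half = bisect(lambda t: model.value(t) - 1.0, 0.0, 5.0)
print(round(t_half, 3))  # 0.693
```

The generic solver never needs to know anything about the model; it just gets a callable, which is exactly the separation described above.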
It sounds to me like the issue here arises more from an “everything is a nail” type of problem than anything else.
Oh, thanks then! I’ve heard people shred on OOP regularly, saying that it’s full of foot-cannons, and while I’ve never understood where they’re coming from, I definitely agree that there are tasks that are best solved with a functional approach.
Can someone please enlighten me on what makes inheritance, polymorphism, and operator overloading so bad? I use them all regularly, and have yet to experience the foot cannons I have heard so much about.
I would say his mouth region is neutral, but the region around his eyes is focused, almost to the point of being intimidating.
I’ve never thought of myself as a conspiracy theorist, but if Jar Jar having been planned as the actual phantom menace, only to be written out of the role because fans hated him, counts as a conspiracy theory: count me in! I think the arguments are compelling, to say the least.
I’ve found ChatGPT reasonably good for one thing: generating regex patterns. I don’t know regex for shit, but if I describe the pattern in words, I get a working pattern 9/10 times. It’s also a very easy use case to double-check.
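As an example of how easy the double-checking is: a prompt like “match an ISO date such as 2024-03-17” might come back as the pattern below (dates and strings here are made up), and verifying it takes two lines.

```python
import re

# A pattern of the kind such a prompt might produce: four digits,
# dash, two digits, dash, two digits, on word boundaries.
pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

print(bool(pattern.search("released on 2024-03-17")))  # True
print(bool(pattern.search("no date here")))            # False
```

Because the success criterion is just “does it match my examples and reject my counterexamples”, a generated pattern is trivially verifiable even if you can’t read regex fluently.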
I definitely roll with “badass tiny mf”, “chill little dude”, “tiny gangsta bro” or any other title making fun of my stature. Call me anything involving “king” and I’ll be inclined to convince you that, even though I’m short, you’ll be shorter once you’re confined to a wheelchair
I was thinking something similar: if you have the computer write in a formal language designed so that it is impossible to make an incorrect statement, I guess it could be possible to get somewhere with this.
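Proof assistants are an existing version of roughly this idea: a claim is a type, a proof is a term of that type, and a false claim simply has no term, so an incorrect “statement” can never be completed. A tiny Lean sketch (the theorem name is arbitrary):

```lean
-- The claim is the type; the proof term is checked by the compiler.
-- If the claim were false, no such term could be written.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```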
GET THE ROUNDIE!