He really doesn’t understand that tariffs are not paid by the exporting country.
That particular line looks like the application of a rule or the substitution of a term; on its own, it's simple.
Yes, it is a huge pain, especially if you want to have round-trip interoperability with humans using markup. Wikipedia had a major challenge with this when they decided to add a rich text editor alongside wiki markup.
Calling people “resources”, and the mindset that delivery teams are just a number you can spend money to increase, are marks of poor project and personnel management as well.
Why should no one be touching it? You’re basically forcing manually communicated sync/check points onto a system that was designed to eliminate those bottlenecks.
If “we work in a way that only one person can commit to a feature”, you may be missing the point of collaborative distributed development.
Never rebase a branch that has left your machine (been pushed) and that another entity may have a local copy of, especially if that entity may have committed edits to it.
Not just calls to self: any time a function’s last operation is to call another function and return its result (a tail call), tail call elimination can convert the call into a goto/jump.
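As a sketch of why this generalizes beyond self-recursion: a trampoline (the helper names below are mine, purely for illustration) makes each tail call explicit and runs it as a loop iteration, which is essentially what tail call elimination does with a jump at the machine level.

```python
# Minimal trampoline sketch: instead of making a tail call directly,
# each function returns a zero-argument thunk representing the call it
# would have made. The trampoline loop runs thunks until a plain value
# comes back, so even mutually tail-recursive functions use constant stack.

def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

def trampoline(result):
    while callable(result):
        result = result()
    return result

# Direct mutual recursion at this depth would blow Python's stack;
# the trampoline runs it as a loop instead.
print(trampoline(is_even(100_000)))  # True
```

Note the two functions tail-call *each other*, not themselves, yet the loop handles both; a compiler with general tail call elimination does the same transformation invisibly.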
If your company is using story points to “measure” developers, they are completely misusing that concept, and it probably results in a low-teamwork environment (as you describe).
The purpose of story points is so a team can say “we’re not taking more than X work for the next two weeks. Make sure it’s the important stuff.” It is a way to communicate a limit to force prioritization by the product owner.
And, in fact, data shows that point estimation converges so poorly on reality that teams may as well assign everything a “1”. The key technique is to make stories the same size, and to reduce variability by having the team swarm/mob to unblock stuck work.
Who creates these tasks? They need to close the year-old items, re-evaluate the work, and break it down into sub-5-day chunks. If there are so many unknowns that that’s impossible, the team needs to brainstorm how to resolve them.
Have you tried Scala.js?
The guy writing didn’t know about generics, so… (To be clear, I trust the Rust core devs know about these things; I’m implying they’re afraid to reference them directly because they think the M word will scare people.)
They seem quite afraid of mentioning the basis for the effects that Haskell, Scala, et al. have used for years: monads.
So they’re going to re-invent it all 🙃
FYI, banks do run exactly this type of analysis inside their own system to get around not being able to share your account activity.
By definition, the 1% are the top 1% of earners in the population, so there are 8,100,000,000 × 0.01 = 81,000,000 of them, not 400.
The richest 400 are the top 0.000005%.
The 1% line in the US is at $819k a year, and $60k worldwide.
I ran BeOS for a bit, and a fun feature was its CPU activity monitor app, which let you click a processor to remove or re-add it to the scheduler. It would let you turn off the last remaining processor, no problem - and instantly freeze the system.
Betavoltaics have been around for 40+ years. https://en.m.wikipedia.org/wiki/Betavoltaic_device
This device generates microwatts. You’d need thousands of them in parallel to power a typical mobile phone.
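Rough order-of-magnitude arithmetic (the per-cell output and phone draw figures below are illustrative assumptions, not numbers from the article):

```python
# Back-of-envelope check, all figures assumed for illustration.
cell_output_uw = 100        # assume ~100 microwatts per betavoltaic cell
phone_draw_uw = 1_000_000   # assume ~1 W (1,000,000 uW) average active draw

cells_needed = phone_draw_uw // cell_output_uw
print(cells_needed)  # 10000 -- thousands of cells in parallel
```

Even with a generous per-cell output, the gap is several orders of magnitude.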
All of their tech job openings are in India or Vietnam. I’d have assumed a major US health data handler would be developing onshore!
For most organizations, the cost of paying programmers far exceeds the cost of CPU time; benchmarks really should include how long the solution took to envision and implement, and how many follow-up commits were required to tune it.
That’s only true in crappy languages that have no concept of async workflows, monads, effects systems, etc.
Sad to see that an intentionally weak/limited language like Go is now the counterargument for good modeling of errors.
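To make the contrast concrete, here’s a minimal `Result` type of my own (not any library’s API): Go-style handling repeats an error check after every step, while a `Result` with `and_then` composes the happy path and carries the first error through automatically.

```python
# Sketch of errors modeled in the type rather than checked inline.
class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    @staticmethod
    def ok(value):
        return Result(value=value)

    @staticmethod
    def err(message):
        return Result(error=message)

    def and_then(self, f):
        # Skip f entirely once an error has occurred.
        return self if self.error is not None else f(self.value)

def parse(s):
    return Result.ok(int(s)) if s.isdigit() else Result.err(f"not a number: {s!r}")

def halve(n):
    return Result.ok(n // 2) if n % 2 == 0 else Result.err(f"{n} is odd")

# One expression instead of two `if err != nil` blocks:
print(parse("42").and_then(halve).value)   # 21
print(parse("41").and_then(halve).error)   # 41 is odd
```

The Go equivalent would check `err` after `parse` and again after `halve`; here the plumbing lives in `and_then` once.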