• 3 Posts
  • 162 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • a better solution would be to add a method called something like ulock that does a combined lock and unwrap.

    That’s exactly what’s done above using an extension trait! You can mutex_val.ulock() with it!
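
    Something like this minimal sketch is what I mean (the trait name here is just illustrative, and the actual extension trait above may differ):

    use std::sync::{Mutex, MutexGuard};
    
    // Illustrative extension trait: ulock() = lock() + unwrap() in one call.
    trait UnwrapLock<T> {
        fn ulock(&self) -> MutexGuard<'_, T>;
    }
    
    impl<T> UnwrapLock<T> for Mutex<T> {
        fn ulock(&self) -> MutexGuard<'_, T> {
            // Panics only if the mutex is poisoned.
            self.lock().unwrap()
        }
    }
    
    fn main() {
        let mutex_val = Mutex::new(5);
        *mutex_val.ulock() += 1;
        assert_eq!(*mutex_val.ulock(), 6);
    }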

    Now that I think about it, I don’t like how unwrap can signal any of “I know this can’t fail”, “the possible error states are too rare to care about”, or “I can’t be bothered with real error handling right now”.

    That’s why you’re told (Clippy does that, I think) to use expect instead, so you can signal precisely whatever you want to signal.


    • C++ offers no guaranteed memory safety.
    • A fictional safe C++ that would inevitably break backwards compatibility might as well be called Noel++, because it’s not the same language anymore.
    • If that proposal ever gets implemented (it won’t), neither will the promise of guaranteed memory safety hold up, nor will any big C++ project adopt it. Big projects already don’t adopt the (rollingly defined) so-called modern C++, and that is something that is part of the language proper, standardized, and available via multiple implementations.

    would you argue that it’s impossible to write a "hello, world" program in C++

    bent as expected


    This proposal is just a part of a damage control campaign. No (supposedly doable) implementation will ever see the light of day. Ping me when this is proven wrong.


  • but futures only execute when polled.

    The most interesting part here is that polling only has to take place on the scope itself. That was actually what I wanted to check, but I got distracted because all spawns are awaited in the scope in moro’s README example.

    async fn slp() {
        tokio::time::sleep(std::time::Duration::from_millis(1)).await
    }
    
    async fn _main() {
        let result_fut = moro::async_scope!(|scope| {
            dbg!("d1");
            scope.spawn(async { 
                dbg!("f1a");
                slp().await;
                slp().await;
                slp().await;
                dbg!("f1b");
            });
            dbg!("d2");
            scope.spawn(async {
                dbg!("f2a");
                slp().await;
                slp().await;
                dbg!("f2b");
            });
            dbg!("d3");
            scope.spawn(async {
                dbg!("f3a");
                slp().await;
                dbg!("f3b");
            });
            dbg!("d4");
            async { dbg!("b1"); } // returned as the scope's result but never awaited, so it never executes
        });
        slp().await;
        dbg!("o1");
        let _ = result_fut.await;
    }
    
    fn main() {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(_main())
    }
    
    [src/main.rs:32:5] "o1" = "o1"
    [src/main.rs:7:9] "d1" = "d1"
    [src/main.rs:15:9] "d2" = "d2"
    [src/main.rs:22:9] "d3" = "d3"
    [src/main.rs:28:9] "d4" = "d4"
    [src/main.rs:9:13] "f1a" = "f1a"
    [src/main.rs:17:13] "f2a" = "f2a"
    [src/main.rs:24:13] "f3a" = "f3a"
    [src/main.rs:26:13] "f3b" = "f3b"
    [src/main.rs:20:13] "f2b" = "f2b"
    [src/main.rs:13:13] "f1b" = "f1b"
    

    The non-awaited jobs are run concurrently, as the moro docs say. But what if we immediately await f2?
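
    The only change, sketched here with just the second spawn (everything else stays the same), is awaiting the spawned future right where it is created:

    scope.spawn(async {
        dbg!("f2a");
        slp().await;
        slp().await;
        dbg!("f2b");
    })
    .await; // the scope body suspends here until f2 finishes, while the already-spawned f1 keeps being polled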

    [src/main.rs:32:5] "o1" = "o1"
    [src/main.rs:7:9] "d1" = "d1"
    [src/main.rs:15:9] "d2" = "d2"
    [src/main.rs:9:13] "f1a" = "f1a"
    [src/main.rs:17:13] "f2a" = "f2a"
    [src/main.rs:20:13] "f2b" = "f2b"
    [src/main.rs:22:9] "d3" = "d3"
    [src/main.rs:28:9] "d4" = "d4"
    [src/main.rs:24:13] "f3a" = "f3a"
    [src/main.rs:13:13] "f1b" = "f1b"
    [src/main.rs:26:13] "f3b" = "f3b"
    

    f1 and f2 are run concurrently; f3 is run after f2 finishes, but doesn’t have to wait for f1 to finish, which is maybe obvious, but… (see below).

    So two things here:

    1. Re-using the spawn terminology here irks me for some reason. I don’t know what would be better though. Would defer_to_scope() be confusing if the job is awaited in the scope?
    2. Even if assumed obvious, a note about execution order when there is a mix of awaited and non-awaited jobs is worth adding to the documentation IMHO.

  • I skimmed the latter parts of this post since I felt like I had read it all before, but I think moro is new to me. I was intrigued to find out how exactly scoped spawn behaves.

    async fn slp() {
        tokio::time::sleep(std::time::Duration::from_millis(1)).await
    }
    
    async fn _main() {
        let value = 22;
        let result_fut = moro::async_scope!(|scope| {
            dbg!(); // line 8
            let future1 = scope.spawn(async {
                slp().await;
                dbg!(); // line 11
                let future2 = scope.spawn(async {
                    slp().await;
                    dbg!(); // line 14
                    value // access stack values that outlive scope
                });
                slp().await;
                dbg!(); // line 18
    
                let v = future2.await * 2;
                v
            });
    
            slp().await;
            dbg!(); // line 25
            let v = future1.await * 2;
            slp().await;
            dbg!(); // line 28
            v
        });
        slp().await;
        dbg!(); // line 32
        let result = result_fut.await;
        eprintln!("{result}"); // prints 88
    }
    
    fn main() {
        // same output with `new_current_thread()` of course
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(_main())
    }
    

    This prints:

    [src/main.rs:32:5]
    [src/main.rs:8:9]
    [src/main.rs:25:9]
    [src/main.rs:11:13]
    [src/main.rs:18:13]
    [src/main.rs:14:17]
    [src/main.rs:28:9]
    88
    

    So scoped spawn doesn’t really spawn tasks as one might mistakenly think!
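
    For contrast, here is a sketch (my own, not from the article) of a plain tokio::spawn task, which is handed to the runtime immediately and runs whether or not its JoinHandle is ever polled; that is the behavior the word “spawn” usually suggests:

    async fn _main() {
        // A real runtime task: it starts running as soon as it is spawned.
        let handle = tokio::spawn(async {
            dbg!("runs even if the handle is never awaited");
        });
        // By the end of this sleep, the dbg! above has almost certainly fired already.
        tokio::time::sleep(std::time::Duration::from_millis(1)).await;
        // Awaiting the handle only retrieves the result; it doesn't start the task.
        let _ = handle.await;
    }
    
    fn main() {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(_main())
    }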


  • Because non-open ones are not available, even for a price. Unless you buy something bigger than the “standard” itself of course, like a company that is responsible for it or has access to it.

    There is also the process of standardization itself, with committees, working groups, public proposals, etc. involved.

    Anyway, we can’t backtrack on calling ISO standards and their like “open” on the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.


  • First of all, unsafe famously doesn’t disable the borrow checker, which is something any Rustacean would know, so your intro is a bit weird in that regard.

    And if you neither like the borrow checker nor like unsafe Rust as is, then why are you forcing yourself to use Rust at all? If you’re bored with C++, there are other languages out there, a couple of which are even developed primarily by game developers, for game developers.

    The fact that you found a pattern that could alternatively be titled “A Generic Method For Introducing Heisenbugs In Rust”, and are somehow excited about it, indicates that you should probably stop this endeavor.

    Generally speaking, I think the Rust community would benefit from making an announcement along the lines of: “If you’re a game developer, then we strongly advise you to become a Rustacean outside the field of game development first, before considering doing game development in Rust”.


  • Federation is irrelevant. Matrix is federated, yet most communities and users would lose communication if matrix.org went offline.

    With transport-only distributability, which I think is what Radicle offers, availability would depend on the peers. That probably means less availability than a big service host.

    Distributed transport and storage would fix this, à la something like Tahoe-LAFS or (old) Freenet/Hyphanet. And no, IPFS is not an option, because it’s generally a meme, is pull-based, and has availability/longevity problems with metadata alone. iroh claims to be less of a meme, but I don’t know if they fixed any of the big design (or rather lack-of-design) problems.

    At the end of the day, people can live with GitHub/GitLab/… going down for a few minutes every other week, or 1-2 hours every other month, as the benefits outweigh the occasional inconvenience by a big margin.

    And git itself is distributed anyway. So it’s not like anyone was cut off from committing work locally or pushing commits to a mirror.

    I guess waiting on CI runs would be the most relevant inconvenience. But that’s not a distributable part of any existing service/implementation, and it couldn’t be distributed without quickly being gravely abused.