* feat: depend on picos, use picos.exn_bt
* refactor: remove dla (domain-local-await)
* non-optional dependency on thread-local-storage
it's a dep of picos anyway
* wip: use picos computations
* disable t_fib1 test, way too flaky
* feat `fut`: wrap picos computations
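For illustration, a minimal sketch of the idea, using only picos' public `Computation` API; the `fut` record shape and function names are hypothetical, not moonpool's actual definitions:

```ocaml
(* hypothetical miniature of a future backed by a picos computation *)
type 'a fut = { comp : 'a Picos.Computation.t }

let create () : 'a fut = { comp = Picos.Computation.create () }

let fulfill (self : _ fut) x : unit =
  (* [try_return] completes the computation; it returns [false] if it
     was already complete, which we ignore here *)
  ignore (Picos.Computation.try_return self.comp x : bool)

let await (self : _ fut) =
  (* suspends the current picos fiber until the computation finishes *)
  Picos.Computation.await self.comp
```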
* detail in fut
* gitignore
* refactor core: use picos for schedulers; add Worker_loop_
we factor most of the thread workers' logic into `Worker_loop_`,
which is now shared between Ws_pool and Fifo_pool
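The shape of that sharing, as a rough sketch (names and signatures are illustrative, not the actual `Worker_loop_` interface): the loop is parameterized over how a worker fetches its next task, so each pool only supplies its queueing policy.

```ocaml
(* hypothetical sketch of a worker loop shared between pools *)
module Worker_loop_sketch (Q : sig
  type state
  val next_task : state -> (unit -> unit) option  (* [None] = shut down *)
end) =
struct
  let run (st : Q.state) : unit =
    let continue = ref true in
    while !continue do
      match Q.next_task st with
      | Some task ->
        (* a real loop would also handle per-task cleanup and errors *)
        (try task () with _ -> ())
      | None -> continue := false
    done
end
```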
* github actions
* feat fut: add `on_result_ignore`
* details
* wip: port to picos
* test: wip porting tests
* fix fut: a trigger that fails to attach never gets signaled, so handle that case directly
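In picos terms the pattern is roughly this (a sketch assuming picos' `Computation.try_attach`/`Trigger.on_signal`; `on_done_` is a hypothetical helper, not moonpool's actual code): if attaching fails, the computation already completed, so the callback must be invoked directly instead of waiting for a signal that will never come.

```ocaml
(* hypothetical helper illustrating the fix *)
let on_done_ (comp : _ Picos.Computation.t) (k : unit -> unit) : unit =
  let trigger = Picos.Trigger.create () in
  if Picos.Computation.try_attach comp trigger then begin
    (* run [k] once the trigger is signaled; [on_signal] returns
       [false] if the trigger was signaled concurrently *)
    if not (Picos.Trigger.on_signal trigger () () (fun _ _ _ -> k ())) then
      k ()
  end else
    (* attach failed: [comp] is already complete and the trigger
       will never be signaled, so call [k] ourselves *)
    k ()
```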
* fix pool: only return `No_more_tasks` when both the local and global queues are empty
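The intended fetch order, sketched with hypothetical names (the real queue types and fields differ):

```ocaml
(* hypothetical sketch of the fixed logic *)
type 'task fetch_result =
  | Task of 'task
  | No_more_tasks

let next_task_ ~pop_local ~pop_global () : _ fetch_result =
  match pop_local () with
  | Some t -> Task t
  | None ->
    (match pop_global () with
     | Some t -> Task t
     | None ->
       (* only when *both* queues are empty is it safe to report
          that there is no work left *)
       No_more_tasks)
```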
* format
* chore: fix CI by installing picos first
* more CI
* test: re-enable t_fib1 but with a single-core fifo pool
it should be deterministic now!
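Roughly what such a deterministic setup looks like (a sketch assuming moonpool's `Fifo_pool.with_` and `Fut.spawn`, plus a test-local `fib`):

```ocaml
(* sketch: a single-worker FIFO pool runs tasks in submission order *)
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

let () =
  Moonpool.Fifo_pool.with_ ~num_threads:1 () @@ fun pool ->
  let fut = Moonpool.Fut.spawn ~on:pool (fun () -> fib 20) in
  assert (Moonpool.Fut.wait_block_exn fut = 6765)
```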
* fixes after reviews
* bump minimal OCaml version to 4.13
* use `exn_bt`, not `picos.exn_bt`
* feat: optional dep on hmap, for inheritable FLS data
* format
* chore: depend on picos explicitly
* feat: move hmap-fls to Fiber.Fls
* change API for local FLS hmap
* refactor: move optional hmap FLS stuff into core/task_local_storage
* add Task_local_storage.remove_in_local_hmap
* chore: try to fix CI
* format
* chore: CI
* fix
* feat: add `Fls.with_in_local_hmap`
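Putting the hmap-based FLS pieces together, a sketch: `k_request_id` is made up, `with_in_local_hmap` comes from the commit above (assumed to take the key, the value, and a callback), and `get_in_local_hmap_opt` is an assumed companion getter alongside `remove_in_local_hmap`.

```ocaml
(* [k_request_id] is a made-up hmap key for the example *)
let k_request_id : int Hmap.key = Hmap.Key.create ()

let () =
  Moonpool.Fifo_pool.with_ ~num_threads:1 () @@ fun pool ->
  Moonpool.Fut.wait_block_exn
  @@ Moonpool.Fut.spawn ~on:pool (fun () ->
       (* bind the key for the dynamic extent of the callback *)
       Moonpool.Task_local_storage.with_in_local_hmap k_request_id 42
       @@ fun () ->
       (* the value is visible to this task (and, with inheritable
          keys, to tasks it spawns); the getter name is an assumption *)
       match
         Moonpool.Task_local_storage.get_in_local_hmap_opt k_request_id
       with
       | Some id -> Printf.printf "request id: %d\n" id
       | None -> assert false)
```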
* chore: depend on hmap for tests
* fix test for FLS
use the inheritable keys
* chore: CI
* require OCaml 4.14 :/
* feat: add `moonpool.sync` with await-friendly abstractions
based on picos_sync
* fix: catch TLS.Not_set
* fix: `LS.get` shouldn't raise
* fix
* update to merged picos PR
* chore: CI
* fix dep
* feat: add `Event.of_fut`
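A sketch of why this is handy, assuming the `Event` module is exposed from `moonpool.sync` (written here as `Moonpool_sync.Event`) with picos_sync-style `sync`/`choose`; the module path and combinator names are assumptions:

```ocaml
(* hypothetical: wait for whichever of two futures completes first *)
let first_of (f1 : 'a Moonpool.Fut.t) (f2 : 'a Moonpool.Fut.t) : 'a =
  let open Moonpool_sync.Event in
  sync (choose [ of_fut f1; of_fut f2 ])
```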
* chore: CI
* remove dep on now defunct `exn_bt`
* feat: add moonpool-io
* chore: CI
* version constraint on moonpool-io
* add Event.Infix
* move to picos_io
otherwise a fairly vicious bug happens: the awaiter is resumed on the
current runner, not its native one, which can cause deadlocks as it
breaks the executors' dependency DAG. When using `resume` there is no
bug since `resume` is designed to always schedule on the correct runner.
- we differentiate between starting a task and resuming a task
- we dynamically check whether we're on one of the pool's own runners
in `resume`/`run_another_task` in the main suspend handler
(this way we can use the local work-stealing queue if we're in the
same pool, even if we're not on the worker that ran the "suspend"
call itself); see the sketch below
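A sketch of that dispatch decision with hypothetical, stubbed-out names (the real code differs; `get_current_worker` stands in for whatever TLS lookup identifies the current worker, and plain `Queue.t` stands in for the actual concurrent queues):

```ocaml
(* hypothetical types standing in for moonpool's internals *)
type task = unit -> unit
type pool = { global_q : task Queue.t }  (* locking omitted for brevity *)
type worker = { pool : pool; local_q : task Queue.t }

(* hypothetical: look up the worker state of the current thread, if any *)
let get_current_worker : unit -> worker option = fun () -> None

let schedule_ (self : pool) (task : task) : unit =
  match get_current_worker () with
  | Some w when w.pool == self ->
    (* already on one of this pool's workers: use its local
       work-stealing queue, even if it isn't the worker that ran
       the original suspend *)
    Queue.push task w.local_q
  | _ ->
    (* foreign thread or another pool's worker: go through the
       pool's global queue so its own workers pick the task up *)
    Queue.push task self.global_q
```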