this week has been dominated by LLM announcements: Llama 3, Mixtral 8x22B, Idefics, Reka. Meta is also cooking a 400B Llama model, and Zuck kinda slipped that it will be dense; what's the trick, when everyone else is doing MoE?
the Stanford AI Index report shows a good effort toward open-sourcing a lot of models, although the dominance of the USA looks a bit concerning from an EU point of view
watched Yannic's video on Flow Matching; wondering how the linear-interpolation path between data and noise connects to iterative $\alpha$-deblending …
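the connection seems to be that both train on the same straight-line blend: flow matching with the linear path $x_t = (1-t)\,x_0 + t\,x_1$ regresses the constant velocity $x_1 - x_0$, which is exactly the blending used in $\alpha$-deblending. a minimal sketch of sampling one such training pair (the function name and shapes are my own, just for illustration):

```python
import numpy as np

def cfm_pair(x1, rng):
    """Sample one flow-matching training pair for a data batch x1.
    Linear path: x_t = (1 - t) x0 + t x1, with noise x0 ~ N(0, I).
    The regression target (velocity) is x1 - x0, constant in t --
    the same straight-line blend as iterative alpha-deblending."""
    x0 = rng.standard_normal(x1.shape)      # noise endpoint of the path
    t = rng.uniform(size=(x1.shape[0], 1))  # per-sample time in [0, 1]
    xt = (1 - t) * x0 + t * x1              # blended point on the path
    target = x1 - x0                        # velocity the network regresses
    return xt, t, target
```

a network $v_\theta(x_t, t)$ would then be trained with a plain MSE against `target`; note the identity $x_t + (1-t)(x_1 - x_0) = x_1$, i.e. following the predicted velocity from $x_t$ lands back on the data.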
off-topic
i was an invited speaker at co.scienza, discussing AI and ethics. my main point was about the monopoly that current companies are seeking, and the resulting centralization of power. additionally, since current AI systems are not rational, they cannot be held accountable for their decisions