I take it you don’t understand how startups work?
OpenAI is not making any profit and is losing money hand over fist today. Valuation and investment rounds aren't profit.
Eh? That article says nothing about their profit margins. Today they have something like $3.5B in ARR (not really; that's annualized from their latest monthly peak, and in February they had more like $2B ARR). Meanwhile their operating costs are over $7B. Meaning they are losing money hand over fist and not making a profit.
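Back-of-the-envelope on those figures (numbers are the ones quoted in this comment, not audited financials):

```python
# Rough burn estimate from the figures above, all in $B/yr.
arr_peak_annualized = 3.5   # ARR annualized from the latest monthly peak
operating_costs = 7.0       # quoted operating costs, "over $7B"

# Even taking the most generous revenue number, the implied gap is large.
implied_annual_loss = operating_costs - arr_peak_annualized
print(implied_annual_loss)  # -> 3.5
```

Even with the peak-annualized revenue figure, costs exceed revenue by roughly 2x.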
I’m not suggesting anything else, just that they are not profitable and personally I don’t see a road to profitability beyond subsidizing themselves with investment.
OpenAI is burning billions of dollars not making profit.
I live upon morsels you happen to drop
This regulation (and a similar one being proposed in California) would not be applied retroactively.
Many (14?) years back I attended a conference (I can't remember now what it was for; I think it was hosted by a complex-systems department at some DC-area university) and saw a woman give a talk about using agent-based modeling to do computational-sociology planning around federal (mostly Navy/Army) development in Hawaii. Essentially a SimCity type of thing, but purpose-built to help aid public planning decisions. Now imagine that, but where the agents aren't just sets of weighted heuristics; instead they're weighted-heuristic, prompt-driven LLMs with higher-level executive prompts to bring them together.
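A minimal sketch of that idea, where a hypothetical `llm_decide()` stands in for a real model call (everything here, including the agent names and options, is illustrative and not from the talk):

```python
import random

# Hypothetical stand-in for a real LLM call: here it just samples an
# option weighted by the agent's heuristic weights. In the imagined
# system this would be a prompted model conditioned on `prompt`.
def llm_decide(prompt, options, weights):
    return random.choices(options, weights=weights, k=1)[0]

class Agent:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # per-option heuristic weights

    def act(self, world_state, options):
        # Each agent combines its heuristics with its own prompt.
        prompt = f"You are {self.name}. World: {world_state}. Choose."
        return llm_decide(prompt, options, self.weights)

def executive_step(agents, world_state, options):
    # The higher-level "executive prompt": aggregate individual agent
    # choices into one planning decision (here, simple majority vote).
    votes = [agent.act(world_state, options) for agent in agents]
    return max(set(votes), key=votes.count)

random.seed(0)
agents = [
    Agent("resident", [0.8, 0.2]),
    Agent("planner", [0.5, 0.5]),
    Agent("base_commander", [0.2, 0.8]),
]
decision = executive_step(agents, "proposed base expansion", ["oppose", "approve"])
print(decision)
```

The interesting part of the real system would of course be inside `llm_decide` and the executive aggregation, not this majority-vote toy.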
I fully agree with this; I would have written something similar but was eating lunch when I made my earlier comment. I also think a big part of pragmatics comes from embodiment and will become more and more important (and I wish Merleau-Ponty were still around to see what he'd make of this).
A lot of semantic NLP tried this, and it kind of worked, but in the meantime statistical correlation won out. It turns out that while humans consider semantic understanding really important, it isn't actually required for the overwhelming majority of industry use cases. As a Kantian at heart (and an ML engineer by trade) it pains me to recognize this, but semantic conceptualization as an epiphenomenon emerging from statistical co-occurrence really might be the way that (at least artificial) intelligence works.
For marketing emails, sure, but for transactional emails you usually don't have to warm IPs; you just need the various email security things mentioned above set up.
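Assuming the "email security things" are the usual SPF/DKIM/DMARC DNS records, a minimal sketch (domain, selector, and policy values are all placeholders):

```
example.com.                        TXT  "v=spf1 include:_spf.example-esp.com ~all"
selector1._domainkey.example.com.   TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.                 TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

With those in place and consistent sending behavior, transactional mail generally lands without a dedicated warm-up period.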
That seems more like an argument for free higher education than for restricting which corpora a deep learning model can train on.