Usually languages are not the issue. It is the code that we write. As long as languages help us find/debug a problem caused by crappy code, we should be good. Coding is kind of creative work. There is no standard way to measure creativity or the pitfalls of using the wrong patterns. Incidents & RCAs usually find these, but most of the time it is already too late to fix the core problem.
Not sure that I agree... some of the worst and most problematic AI code I've had to deal with has been in Java or C#. I've found TS/JS relatively nice, and Rust in particular has been very nice in terms of getting output that "works", as long as function/testing is well defined in advance.
I just tried a portion of the URL & it took me to a Bangladesh university - http://182.160.97.198:8080/xmlui/bitstream/handle/ . Interesting. When I go to the root of this URL, the error messages list the software the site is powered by. Generally this is not considered a secure way of running a site.
This is good. Has anyone tried building a large-scale application entirely with Claude and maintaining it for a while with paying users? I’m looking for real-life examples for inspiration.
Using AI we can make 1000s of commits per day, so this metric becomes even more pointless in the age of AI. Increased sales, new subscription counts, reduced bug counts, fewer incidents, etc. - those can be real metrics. I'm sure I am preaching to the choir.
I have coworkers committing tens or hundreds of thousands of "lines of code" a week, because they'll push whatever the AI gives them, including dependencies and virtualenvs, without any review.
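(For anyone dealing with the same thing: a minimal .gitignore sketch that would stop most of this from ever being committed. The exact entries depend on your stack, so treat these as illustrative, not exhaustive.)

```gitignore
# Python virtualenvs -- should never be committed
venv/
.venv/
env/

# Dependency trees pulled in by package managers
node_modules/
vendor/

# Byte-compiled / cache artifacts
__pycache__/
*.pyc
```

A pre-commit hook or CI check that rejects commits touching these paths also helps when reviewers can't keep up.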
Of course, at the same time we're getting dozens of alerts a week about services deployed open to the Internet without authentication and full of outdated, vulnerable libraries (LLMs will happily add two- or three-year-old dependencies to your lockfiles).
Indeed. It seems, at least in America (I’m less familiar with the situation abroad) that computer science researchers who want to do longer-term work are getting squeezed. Less funding means fewer research positions in academia. Industry has many opportunities, especially in AI, but industry tends to favor shorter-term, product-focused research as opposed to longer-term work with fewer immediate prospects for productization. This is a great environment for many researchers, but researchers who want to work on longer-term, “blue-skies” projects might not find a suitable position in industry these days.
There are still opportunities, but they aren’t paid nearly as well as less researchy positions in industry. US post-doc salaries at state universities aren’t that high.
Curious to know who would spend this much money without external funding. Would you spend VC money on this nameless brand? Are there any guardrails or clauses to control these kinds of expenses?