Hacker News

> ... because we have no industry wide, respected entrance exam.

This. In fact there needs to be one exam, with subsections and subscores per subsection, along with an overall score.

Being able to solve an algorithmic challenge on a whiteboard does not mean that you can:

a) write readable and maintainable code

b) effectively communicate requirements to whoever is actually running your code in production

c) know about application-level security vulnerabilities like SQL injection, and know not to mindlessly introduce them into your code

d) instrument your code for metrics and logging in ways that are more or less standard and expected for production monitoring

e) engage in effective review of others' code

f) use production-ready API-handling techniques, e.g. exponential backoff and circuit breakers.

All of which are testable in a certifiable way.
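To make (f) concrete, here's roughly what I'd expect a candidate to be able to sketch: retry with exponential backoff plus jitter, and a simple circuit breaker. (Toy code, my own names, not from any particular library.)

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=0.1):
    """Retry fn, sleeping base * 2^attempt (with jitter) between failures."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, propagate the last error
            # Jitter spreads out retries so clients don't stampede together.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

class CircuitBreaker:
    """Refuse further calls after `threshold` consecutive failures."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the breaker
        return result
```

A real implementation would also add a half-open state so the breaker can recover after a cooldown, but even this much is easy to test on an exam.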

The simple truth of the matter is that the vast majority of companies hiring 10 developers who studied efficient algorithms for their interviews would be better served by hiring 9 engineers who understand how production and organizational realities affect their work, plus one "performance hacker" (who may or may not have a master's degree in CS) whose job is to find the worst real-world production bottlenecks and replace those implementations with algorithmically more efficient ones (plus stricter performance tests, so the more efficient code doesn't get ripped out later).



That's an interesting conclusion and it makes a lot of sense to me. I've heard that most projects fail due to organizational failures rather than development performance failures. It's probably better to invest in process- and quality-aware developers than in rockstar programmers. But I've never seen studies on that.


How would you test, "in a certifiable way", just the first of your examples "write readable and maintainable code"?


The SAT has a writing section - there is no reason you can't write code on a standardized test.

The idea is to assign low points to people who write one long function, and high points to people who write many small, self-documenting functions. Such an exam could also ask questions like "what is the cyclomatic complexity of the following code?", which, while not an indicator that the test taker will always write low-complexity code in a professional environment, at least indicates that the test taker is aware of maintainability metrics like cyclomatic complexity - which is more than can be said of programmers who mindlessly turn in 400-line functions with deeply nested conditionals.
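And the metric itself is mechanical enough to grade automatically: cyclomatic complexity is (roughly) one plus the number of decision points. A toy estimator over Python source, using the stdlib ast module (my own sketch, not a real linter - e.g. it counts a whole boolean expression as one branch):

```python
import ast

# Node types that add a branch to the control-flow graph.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))
```

So a straight-line function scores 1, and every if/loop/handler bumps the score - exactly the kind of thing a standardized grader could run over submitted code.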


You might not test this. Keep in mind, the bar exam, medical boards, actuarial exams, and other entrance exams aren't intended to establish competence in all areas of professional practice.

I think that the Google-style data structures and algorithms exams are a good case in point. Think of these like the actuarial exam for linear algebra, vector calc, and numerical analysis. These exams don't, of course, test everything important about being an actuary. But they do establish competence in something that can be tested. As a result, actuaries don't (to my knowledge) have to do 5 hours of whiteboard exams doing LU matrix decomposition or finding a steepest descent vector. Whereas software developers have to find all matching subtrees in a binary tree over and over (and over).

What I like about the actuarial exams is that while they are rigorous, and it helps immensely to have majored in math or something closely related, you are free to decide how to prepare for them. So although I like the idea of a widely recognized entrance exam, I really don't like the idea of something like law school inserting itself (and $200k of debt) between an individual and the right to take the exam.



