ORMs come with a lot of baggage that I prefer to avoid, but it probably depends on the domain. Take an e-commerce store with faceted search. You're pretty much going to write your own query builder if you don't use one off the shelf, seems like.
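To make that concrete, here's a rough sketch (Python, with a hypothetical `products` table and made-up facet names) of the kind of hand-rolled builder I mean: every facet the user selects becomes another parameterized WHERE clause.

```python
# Rough sketch of a hand-rolled faceted-search query builder.
# Table and facet names are hypothetical; values are bound as
# parameters rather than interpolated into the SQL string.
def build_product_query(facets: dict) -> tuple[str, list]:
    sql = "SELECT * FROM products"
    clauses, params = [], []
    if "brand" in facets:
        clauses.append("brand = ?")
        params.append(facets["brand"])
    if "max_price" in facets:
        clauses.append("price <= ?")
        params.append(facets["max_price"])
    if facets.get("in_stock"):
        clauses.append("stock > 0")
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

# build_product_query({"brand": "acme", "max_price": 50})
# -> ('SELECT * FROM products WHERE brand = ? AND price <= ?', ['acme', 50])
```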
I once boasted about avoiding ORMs until an experienced developer helped me see that 100% hand-rolled SQL plus custom query builders is just you writing your own ORM by hand.
Since then I've embraced ORMs for CRUD. I still double-check its output, and I'm not afraid to bypass it when needed.
Not really. ORMs have defining characteristics that hand-rolled SQL with mapping code does not. E.g., something like `Users.all.where(age > 45)` creates queries from classes and method calls, while hand-rolled SQL queries are... well... hand-written.
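A toy sketch of that distinction (not any real ORM's API, just the shape of the idea): the ORM-style query is an object assembled by chained method calls, and the SQL falls out of it.

```python
# Toy illustration only: a query object composed from method calls
# (ORM-style) versus a hand-written SQL string.
class Query:
    def __init__(self, table: str):
        self.table = table
        self.conditions: list[str] = []

    def where(self, condition: str) -> "Query":
        self.conditions.append(condition)
        return self  # chainable, like Users.all.where(...)

    def to_sql(self) -> str:
        sql = f"SELECT * FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        return sql

# Built from objects and method calls:
print(Query("users").where("age > 45").to_sql())
# versus simply hand-written:
print("SELECT * FROM users WHERE age > 45")
```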
It's amusing to consider how much of a Rorschach test this article must be. But it's a great point, even if it arms us to abusively write off unwelcome ideas as scams. As the author points out, Pascal's reasoning is easily applied to an infinity of conceivable catastrophes - alien invasions, etc. That Pascal specifically applied his argument to the possibility of punishment by a biblical God was due to the psychological salience of that possibility in Pascal's culture - a truly balanced application of his fallacious reasoning would be completely paralyzing.
The authors here are claiming, as your quote states, that biological evolution is just one instance of a more general phenomenon. I'm not sure that's contrary to the views you're expressing. You wrote:
> The expectation that life is somehow special is wrong. There is, as far as we can see, no difference in the quarks in a dog and those in a rock
But the authors' examples do include the "speciation" of minerals! As I read it, the authors describe:
- some initial set of physical states (organisms, minerals, whatever)
- these states create conditions for new states to emerge, which in turn open up new possibilities or "phase spaces", and so on
- these new phase spaces produce new ad hoc "functions", which are (inevitably, with time and the flow of energy) searched and acted upon by selective processes, driving this increase of "functional information".
I don't think it's saying that living things are more complex or information dense per se, but rather, that this cycle of search, selection, and bootstrapping of new functions is a law-like generality that can be observed outside of living systems.
I'm not endorsing this view! There do seem to be clear problems with it as a testable scientific hypothesis. But to my naive ear, all of this seems to play rather nicely with this fundamentally statistical (vs deterministic) picture of reality that Prigogine described, with the "arrow of time" manifesting not just in thermodynamics and these irreversible processes, but also in this diversification of functions.
Making a career out of making the case for air pollution. I hope the money is worth it. This guy should have to live and raise his kids next to a coal plant.
This is a great demonstration of the fact that people coming from very different perspectives can, through good-faith inquiry, find much to agree on. I think there are a lot of thoughtful arguments and conclusions in here, even though I generally find the Catholic Church's metaphysical pyrotechnics fairly ridiculous. It goes to show that E.O. Wilson's concept of "consilience" can apply even outside the sciences: just as different lines of scientific inquiry converge on a common reality, so can very disparate forms of moral inquiry converge, because they all proceed from a shared human experience of what's good and bad in life.
Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.
It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.
I suppose I would challenge the following, which I've also sometimes heard from philosophers:
>However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook. But if that turns out to be critical, who is to say that this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector spaces machines already utilize? People constantly rotate through essentialist concepts that supposedly reflect an intangible "human element" and shift the conversation onto non-computational grounds, and these turn out to simply reproduce the errors of every previous variation of intelligence essentialism.
My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tightening the strike zone on an 0-2 count in a big situation, widening it on a 3-0 count), completely forgetting that you could have machines call those adjustments more accurately too, if you really wanted to.
Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.
Yep, I think that is an interesting point! I definitely think there are important ways in which human intelligence is embodied, but yeah - if we are modeling intelligence as a function, there's no obvious reason to think that whatever influence embodiment has on the output can't be "compressed" in the same way. After all, in general it doesn't matter how ANY of the reasoning that AI is learning to reproduce is _actually_ done. I suppose, though, that that gets at the later emphasis:
> Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform
One might concede that AI can produce a good-enough simulation of an embodied intelligence, while emphasizing that the value of human intelligence per se is not reducible to its effectiveness as an input-output function. But I agree the Vatican's statement seems to go beyond that.
As an aside, and more out of curiosity, I want to mention a tiny niche corner of CogSci I once came across on YouTube. There was a conference on a fringe branch of consciousness studies where a group of philosophers holds that there is a qualitative difference in experience depending on the material substrate.
That is to say, one view of consciousness suggests that if you froze a snapshot of a human brain in the process of experiencing and then transferred every single observable physical quantity into a simulation running on completely different material (e.g. from carbon to silicon), then the reproduced consciousness would be unaware of the swap and would continue completely unaffected. This would be a consequence of substrate independence, which, as far as I can tell, is the predominant view in both science and philosophy of mind.
I was fascinated that there was an entire conference dedicated to the opposite view. They contend that there would be a discernible, qualitative difference in the experience of the consciousness. That is, the new mind running in the simulation might "feel" the difference.
Of course, there is no experiment we can perform as of now so it is all conjecture. And this opposing view is a fringe of a fringe. It's just something I wanted to share. It's nice to realize that there are many ways to challenge our assumptions about consciousness. Consider how strongly you may feel about substrate independence and then realize: we don't actually have any proof and reasonable people hold conferences challenging this assumption.
It's going to sound rather hubristic, being that I'm just a random internet commenter and not a conference of philosophers, but this seems... nonsensical? I don't understand how it isn't obvious that the new consciousness instance would be unaware of the swap, or that nevertheless the perspective of the original instance would be completely disconnected from that of the new one.
It seems to be a question that many apparently smart people discuss endlessly for some reason, so I guess I'm not surprised by this proposal in particular, but it's really mystifying to me that anybody other than soulists think there's any room for doubt about it whatsoever.
Completely agree. I'm interested in the detour, perhaps as fascinated by the human psychology that prompts people to invest in these debates as by anything about the question itself. We have psychology of science and political psychology, so a version of that which attempts to predict how philosophers come to their dispositions seems like a worthy venture as well.
And then Marvin Minsky asked: what if you substitute one cell at a time with an exact functional electronic duplicate? At what point does this shift occur?
Sounds like an experimental question. Maybe 99%, maybe 1%, maybe never.
Can you suggest another way to answer your question other than performing an experiment? Can you describe how to perform an experiment to answer your question?
Would you agree to be the subject of such an experiment?
>I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally?
Well, Searle argued against it when presenting the Chinese Room argument, but I disagree with his take.
I personally believe in the virtual mind argument with an internal simulated experience that is then acted upon externally.
Moreover, if this is the key to human-like intelligence and learning in the real world, I do believe that AI would very quickly surpass our limitations. Humans are not only embodied; we are prisoners of our embodiment, and we only get one body. I don't see any particular reason why a model would be trapped in one body when it could "hivemind" or control a massive number of bodies/sensors to sense and interact with the environment. The end product would be an experience far different from what a human experiences and would likely be a superorganism in itself.
Experience is biological and analog; computers are digital. That's the core of the problem. It doesn't matter how many samples you take; it's still not the full experience. Witness vinyl.
This is a just-so story more than an actual argument, and I would say it's exactly the kind of essentialism I was talking about previously. In fact, the versions of the argument typically put forward by Anglosphere philosophers, and in this case by the Vatican, are actually more nuanced. The reference to the "embodied" nature of cognition at least introduces a concept that supports a meaningful argument, one that can be engaged with or falsified.
It could be at the end of the day that there is something important about the biological basis of the experience and the role it plays in supporting cognition. But simply stipulating that it works that way doesn't represent forward motion in the conversation.
I believe the parent is referring to the HN crowd's reaction to this post, which is interestingly rather diverse (though I could be wrong, and they could be referring to the document and its sources).
Either way, I must admit that, as a Catholic, I appreciate the great discussion here. There are of course the usual snarky comments you would expect regarding the Church and religion (which is fine by me), but overall it's a well-grounded discussion.
I'm personally enjoying reading the thoughtful perspectives of everyone.
Given the scale and variety of transformations in the 20th century - technological revolution, mass urbanization, the integration of billions of new workers into global markets, nuclear weapons, mass media, environmental change, and unprecedented population growth – it would be very surprising if all the graphs just maintained linear trends the whole time. Many of these graphs appear to show continuations - though perhaps at inflection points of exponential growth – of trends already taking place.
My guess is we're supposed to read this sentence as:
∀h G(h)
(for all my hats, the hat is green)
or whatever similar formulation:
∀x ((H(x) ∧ M(x)) → G(x))
(for all x, if x is a hat and x is mine, then x is green)
Either way, the general idea is that negating the statement (making it a lie) turns the universal into an existential claim about a non-green hat:
∃h ¬G(h)
(there exists one of my hats such that it is not green)
Or in the case of the alternate formulation:
∃x ((H(x) ∧ M(x)) ∧ ¬G(x))
(there exists an x, such that x is a hat and x is mine, and x is not green)
So I think we answer (A) The liar has at least one hat.
All that said, I think other commenters are rightly pointing out that this relies on a very questionable distinction between semantics - which is what we've formalized above - and pragmatics. In conversational pragmatics, "All my hats are green" means that I have at least one hat (probably at least 3, even, since the sentence didn't say "My only hat" or "Both my hats"). One might explain this by way of an implicit pragmatic conversational principle that all statements should be relevant and informative in some way, which vacuously true statements (like, "all grass growing on the moon is purple") are not (see the "Gricean maxims").
If we don't make this implausible distinction between semantics and pragmatics (implausible to me because it assumes that sentences in general are usefully analyzed as having "propositional" meanings which can be evaluated outside of any conversational context), we might cash out the statement as:
∀h G(h) ∧ ∃h G(h)
so we can conclude, since this is a lie, that:
∃h ¬G(h) ∨ ∀h ¬G(h)
Which is consistent with the liar owning no hats: with no hats at all, ∀h ¬G(h) is vacuously true.
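For the skeptical, here's a quick brute-force sanity check (Python, with hypothetical predicate names) over all tiny hat-worlds: the negation step goes through, and the two readings really do differ on the hatless world.

```python
# Brute-force check over tiny finite models. A "world" is a tuple of
# the liar's hats (True = green); the empty tuple is the hatless world.
from itertools import product

def forall_green(hats):      # forall h, G(h)
    return all(hats)

def exists_not_green(hats):  # exists h, ~G(h)
    return any(not g for g in hats)

worlds = [w for n in range(4) for w in product([True, False], repeat=n)]

# Classical equivalence used above: ~(forall h, G(h)) <-> exists h, ~G(h)
for w in worlds:
    assert (not forall_green(w)) == exists_not_green(w)

# Semantic reading: the lie makes "exists h, ~G(h)" true, which the
# hatless world cannot satisfy -- hence answer (A).
assert not exists_not_green(())

# The pragmatic reading's negation, exists h ~G(h) OR forall h ~G(h),
# IS (vacuously) satisfied by the hatless world.
assert exists_not_green(()) or all(not g for g in ())
print("all checks passed")
```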
It's hard to take too much issue with the general argument. Seems like there are loads of examples of theories developed out of "pure" intellectual pursuits that found practical applications later on.
But I quibble with the broad inclusion of "jazz" in the list. I don't really like the idea that jazz is this never-ending process of avant-garde musical boundary pushing. There are these cults of personality around artists like Miles Davis and Coltrane, and at some point people decided that "easy-listening", "smooth jazz", and "elevator music" were the nadir of "cool", but those particular cultural trends don't necessarily define jazz as a whole. It's also reasonable to regard jazz as having a matured musical vocabulary that we can construct accessible tunes out of without pushing boundaries all the time. I suspect a lot more people do enjoy "lounge" jazz than would admit it.