recursivecaveat 3 days ago

I always thought that the Alan Kay model is fundamentally misguided, in that it is explicitly inspired by cell biology and distributed computer systems, i.e. extremely hard problems. Basically all the hardest things to model, predict, design, and debug are these kinds of bottom-up systems where all interesting behavior comes from the emergent runtime interaction of tons of tiny components. This works okay for cells because there is no programmer floating above the system trying to understand it and make specific changes, but until we start programming by natural selection I wouldn't describe it as a good paradigm for organizing behavior.

I much prefer my programs to have a sort of top-down structure, more like a military than an economy. Obviously late binding and dynamic behavior are often necessary, but I would not lean into them, in the same way I would not, say, make all my variables global just because you sometimes need it.

  • nine_k 2 days ago

    I would say that Alan Kay was trying to create something like Erlang, if you look at the early plans, and the terminology like "messages". But due to hardware limitations of the time, he ended up with a much less capable system, lacking the asynchronicity and heterogeneity.

    Erlang appeared 15 years later, and was / is quite successful in certain niches, more recently also as Elixir.

  • bmitc 3 days ago

    But many times, such a top-down structure cannot work, especially if your program interacts with external systems, such as hardware. In those cases, you need a model in your program that is very similar to biological systems. For example, a module that interacts with hardware needs to be self-sufficient and asynchronous, interacting with the rest of the system through messaging. Even if there is a top-down structure at the higher level, that structure cannot fully dictate how the hardware behaves or performs. For example, someone outside of your program could go off and change settings on the front panel.
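
    A rough sketch of that shape, with a made-up panel device and Python queues standing in for the real hardware I/O:

        import threading, queue

        class PanelModule:
            # Hypothetical hardware-facing module: owns its state, runs its own loop,
            # and talks to the rest of the program only through message queues.
            def __init__(self):
                self.inbox = queue.Queue()    # commands from the rest of the program
                self.events = queue.Queue()   # unsolicited notifications, e.g. a front-panel change
                self._settings = {"gain": 1.0}
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    msg, payload = self.inbox.get()
                    if msg == "set":
                        self._settings.update(payload)
                        self.events.put(("settings_changed", dict(self._settings)))
                    elif msg == "read":
                        payload.put(dict(self._settings))  # reply on a queue the caller provided

        panel = PanelModule()
        panel.inbox.put(("set", {"gain": 2.0}))
        reply = queue.Queue()
        panel.inbox.put(("read", reply))
        print(reply.get())           # {'gain': 2.0}
        print(panel.events.get())    # ('settings_changed', {'gain': 2.0})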

  • armitron 3 days ago

    Top down architecture doesn’t scale and puts a hard limit on the problems one can tackle before complexity explodes. The Internet, the largest distributed system we have, is based on bottom-up cell-like biologically inspired models. Kay was prescient and decades ahead of his time.

  • pakl 3 days ago

    IMHO (from the viewpoint of a neuroscientist) the biological inspiration is quite measured and restrained in his work…

    The problem he was proposing we solve is computing with heterogeneous “machines”. This doesn’t preclude the regimented organization you are favoring above.

    Please see my other comment on call-by-meaning.

  • atomicnature 3 days ago

    What do you think of the Internet? Has it worked? Has it scaled? Is it reliable?

    Remember that Alan Kay, his team and his colleagues had a lot to do with this thing you are using to rant here :)

    • lmm 2 days ago

      > What do you think of the Internet? Has it worked? Has it scaled? Is it reliable?

      The internet has certainly grown and does certain things reliably, but it's also extremely difficult to control or change. For most business purposes, producing something like the internet would be a failure.

      > Remember that Alan Kay, his team and his colleagues had a lot to do with this thing you are using to rant here :)

      Given how difficult his ideas apparently are to convey, and how little of them the programming paradigms that actually get used to build things embody, I find that pretty dubious. E.g. didn't he explicitly disavow C++?

      • atomicnature 2 days ago

        Why would anyone judge Alan Kay on making any particular biz succeed? He's been a big contributor to the "ideas ecosystem" as a researcher's researcher.

        It's like asking Bezos to care about the Internet as a technical concept ("Internet? Schminternet? I don't care as long as it delivers the best customer experience!").

        Kay was and still is a dedicated researcher - and he has had great influence on people like Jobs/Bezos as well - behind the scenes. There are very few parts of modern technology untouched by his ideas. He's a Turing Award winner for a reason, you know.

        Re C++ - Kay's job as a researcher is not to defend whatever sh*t humans have figured out & are content with but to paint a picture of the future, build teams around it and demonstrate prototypes. He has done that exceedingly well I'd say.

        • lmm a day ago

          Well, put it this way: in a world where Kay was a charlatan (not in the sense of deliberately lying, but in the sense of having just flash and no substance) who was in the right place at the right time, what would be different? How can I tell?

          • atomicnature a day ago

            Newton spent 1/3rd of his life pursuing alchemy, another 1/3rd pursuing biblical prophecies. Paul Graham once said that Newton wasted so much of his life (basically PG thinks he knows what's worthy work and what is not - I mean, exploring the factuality of "God" must obviously be nonsense, right?). Maybe you'd call Newton a charlatan too - because hey, what biz did he make successful?

            I don't think either PG or most people on this forum are fit to judge whether what Kay is saying is of value or not. And I say this with utmost humility. These are scholarly researchers, who operate at the edge of human knowledge/insight. Who am I to judge Newton or someone of that calibre - to question why he was interested in alchemy, or why Kay says particular things that seem far-fetched to you or me?

            Continuing with the Newton analogy - I think one aspect with CS/Software is that it is a new field - you should not apply the definition of a "Charlatan" you'd use in a more established field such as Physics. Kay's agenda is to advance a nascent field. Just like Newton was dabbling with Alchemy before the establishment of the modern field of Chemistry.

            • lmm a day ago

              The vast majority of people who worked on alchemy and biblical prophecies were either charlatans, or, at best, people who diligently researched the wrong thing. Their names are rightly mostly lost to history.

              Newton earned the respect we give him with his theories of gravity and calculus. He made concrete explanations that other people were able to understand and build on - even his enemies, who were many, could not deny the correctness of his results. And the whole edifice of science, engineering, and industry is built on that work - even today, Newton's laws of motion are something people have to learn and understand, and a lot of people do understand, and teachers distinguish between people who understood them and people who did not, and test whether people's understanding of them is correct or not. People judge Newton's work all the time, and they are right to do so.

              Did he spend a lot of his life on bullshit? Yes, probably. So does PG, so do any number of Nobel Prize winners. But we don't remember his name because of his work on alchemy and biblical prophecies. There's a huge difference between someone who did some productive work and some bullshit, and someone who only or mostly did the bullshit. And if you take the position that you aren't fit to judge who's a genius and who's a charlatan and you're not even going to try, you're going to get taken for a ride.

              • atomicnature a day ago

                So you are saying Alan Kay is 100% bullshit? Or 50-50 sense/bullshit? 25-75? What is the percentage that makes someone a crank or not? Because Newton has a 30/70 score by this metric.

                In my book Kay makes lots of sense most of the time, if you put in the effort. Maybe you see it totally differently.

                Another thing - with Newton, for example - I think the way he studied these other subjects was still very honest, very sincere, and he made great efforts to get things right. So I'd say Newton maintained methodological integrity throughout, regardless of results (quality of efforts > quality of results). My view of Kay would be the same - I think he has high levels of integrity. I can cite why I think so, but then this thread is taking too long already.

                • lmm a day ago

                  > So you are saying Alan Kay is 100% bullshit? Or 50-50 sense/bullshit? 25-75? What is the percentage that makes someone a crank or not? Because Newton has a 30/70 score by this metric.

                  I don't think it's a percentage, I think we evaluate people on the non-bullshit they've done rather than the bullshit. And as far as I can see Kay hasn't done much that's valuable - I mean, I think there's merit in Smalltalk, but the parts of its design that I think are good are disjoint from the parts that Kay talks about. To the extent that the things he's said convey meaning they tend to be wrong - object orientation has failed in multiple incarnations, late binding has failed, live systems have failed, etc..

                  > In my book Kay makes lots of sense most of the time, if you put in the effort. Maybe you see it totally differently.

                  I do. Some of the gnomic statements he's made have been retrospectively interpreted to mean things that make sense and are useful (e.g. "oh, obviously he meant actors"). But as far as I can see no-one ever managed to interpret them in a way that made sense and contributed to building something useful ahead of time - it's more of a Nostradamus situation than him having actual insight.

                  I mean, I assume he hasn't achieved literally nothing his whole life, that at some point he's done research that contributed to something useful. But I've reached the view that all the stuff he's famous for, all the stuff that people quote, is bullshit.

                  > I think the way he studied these other subjects was still very honest, very sincere, and he made great efforts to get things right. So I'd say Newton maintained methodological integrity throughout, regardless of results (quality of efforts > quality of results).

                  The thing is, it's much harder to judge efforts than results, so it's easy for a charlatan to look like they were making high-quality efforts. I'm willing to trust that Newton had methodological integrity because he was able to produce great results, and so I'm willing to accept that the efforts that led to those results carried over to other parts of his life (not that I think it actually matters either way - if what you're studying is fundamentally rotten from the start, then an investigation with higher methodological quality is a castle on sand). You have to be a lot more sceptical if you don't have that proof that the person is at least capable of high-quality efforts.

                  • igouy a day ago

                    > … object orientation has failed in multiple incarnations, late binding has failed, live systems have failed, etc

                    What's your definition of failed?

      • TZubiri 2 days ago

        "it's also extremely difficult to control or change"

        Hence its success.

        It is controllable and changeable exactly in the ways that are appropriate, by those who should be able to control it and change it.

    • afiori 2 days ago

      Conway's law applies to the internet too.

      The internet is as distributed and decentralized as it is because it was made by a distributed and decentralized entity (thousands of companies and individuals).

      Had most of them been willing to coordinate and cooperate with each other we would have a very different internet.

  • cxr 3 days ago

    This is also why the "computer science" label (until now) never really made sense for traditional programming; you start from an intention to reach a goal and contrive a system that can achieve it, generally understanding the means by which it is accomplished from end to end—or at least you have the option of zooming in from a given level of abstraction to a lower one to work out how the pieces fit together there and play their part in getting us where we're aiming at the highest level. Science isn't that. Science is what humanity has to resort to when a thing is not knowable by other means—the preferred form for modification is not at hand. Generally, when someone is doing something akin to science where traditional software development is concerned, it's regarded as sloppy—that you need to stop goofing around and actually do the work of understanding and reasoning about what you're dealing with instead of poking at it and making observations and inferences.

    This is different now with black box systems like LLMs and other neural networks (transformer-based or not) that we don't understand because they were never actually designed from a blueprint that lends itself to an understanding by others that approaches the understanding of the originator(s).

    There's an argument to be made that our avoidance of systems that call for actual science and our attraction to focusing on the smaller subset consisting of grokkable systems, out of a larger set of possible ones, is an indication of immaturity. It's parochial and blinkered. We are big fish in a small pond.

    • threatofrain 3 days ago

      > Science isn't that. Science is what humanity has to resort to when a thing is not knowable by other means—the preferred form for modification is not at hand.

      Science is description and explanation on top of empiricism. It is the first means by which people understand things, not the last, as formal methods came way late.

      This drive to properly name things also gets into the somewhat similar debate of whether math is discovered or invented. And somewhere someone is trying to determine whether it's appropriate to call math a science, an art, or engineering.

      • saghm 3 days ago

        > Science is description and explanation on top of empiricism. It is the first means by which people understand things, not the last, as formal methods came way late.

        I don't think that's at odds with what the parent comment said; the reason we use empiricism and description for analyzing reality is because we didn't create it and we don't know the rules beforehand. When designing a software system, you _choose_ the rules of which things interact and which things don't, and how those interactions occur; there's no need for empiricism in order to discover these interactions. We don't necessarily need to use science to understand our software systems because we rule over them by fiat and can choose to design them in ways that make it easier for us to understand them.

      • azinman2 2 days ago

        Science is a method designed to get closer to truth. That’s all.

        • threatofrain a day ago

          Science is distinguished from religion, pure metaphysics, and other approaches to truth through a focus on empiricism and a formalization of explanatory methods.

  • why-el 3 days ago

    To be inspired by something really has no bearing on how the _inspired_ thing is built. I think you place much emphasis on that, but really it does not bring much to the argument. One can say that a plane was inspired by a bird (and it was, since we wouldn't have tried to build one if we hadn't seen birds flying), but a plane is not designed like a bird.

    I also somewhat contest that "interesting behavior comes from the emergent runtime interaction of tons of tiny components". There can be very tight, hierarchical structures to programs designed in the way Alan Kay talks about. He is promoting clear, manageable interactions, not emergent unpredictability, which is something I am sure you came across (we all did), but I would not go so far as to describe the whole model of Kay as "fundamentally misguided". He talks of "clear fences", which can be understood to refer to things like actor-based models, with controlled, clear messages, as done in languages such as Erlang.

  • DrScientist 2 days ago

    > but until we start programming by natural selection I wouldn't describe it as a good paradigm for organizing behavior.

    But aren't we? The modern ML revolution is programming via natural selection. Emergent behaviour from complex interactions of simple components trained by selection.

    And of course, experience underlines how hard these programmes are to reason about.

    However, it seems to me the key trick of these systems is that the very complexity that makes them so hard to understand is what gives them their computational power.

  • TZubiri 2 days ago

    "This works okay for cells because there is no programmer floating above the system trying to understand it and make specific changes."

    Physicians.

    "but until we start programming by natural selection I wouldn't describe it as a good paradigm for organizing behavior."

    Machine learning, alternatively startups selected by market forces.

  • panarchy 3 days ago

    "until we start programming by natural selection"

    We already have reinforcement and other types of machine learning?

  • alexashka 3 days ago

    What is your argument, besides personal taste?

    Something being hard is not an argument for or against anything.

    Alan Kay is misguided because he prefers a hard thing and you prefer a simpler thing?

    • lmm 2 days ago

      > Something being hard is not an argument for or against anything.

      A paradigm being hard to get things done in is definitely an argument against the value of that paradigm.

      • alexashka 2 days ago

        Right, because Alan Kay is suggesting we do the hard thing when we can instead do the easy thing and achieve a similar outcome.

        That Alan Kay guy sure is a bit of a dummy huh.

        • lmm a day ago

          This but unironically.

Animats 3 days ago

Kay's ideas about "messaging" were never communicated well.

He seemed to be arguing for a pure imperative style. You send a message to make something happen. This is the opposite extreme from pure functional programming. A style of programming where everything is an imperative statement with no return value might be possible.

GPU programming is kind of like that. Neural nets are like that. There's an analogy to biology, which doesn't seem to do function calls but does push information around. That apparently appealed to Kay.

Functional programming makes things local. Pure imperative programming makes things global. This is hard on human programmers. It takes more short-term memory than most humans have to work in pure imperative mode.

Kay was heavily into discrete-event simulation. That's what Simula had objects for, and that's where Smalltalk got objects. All those objects are sending messages to each other, driving the simulation forward. The original Smalltalk paper has discrete-event simulation examples. It's possible to build a GUI as a discrete-event simulator, with all the widgets sending one-way messages to each other, but that's seldom done. Discrete-event simulation became a tiny niche in computing. Kay thought of it as a central organizing concept for computing. That's not where the field went.
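
To make the discrete-event picture concrete, here's a toy sketch (invented names, not Simula or Smalltalk): objects post one-way messages onto a time-ordered queue, a loop delivers them, and no return value is ever used.

    import heapq

    class Simulator:
        # Toy discrete-event core: a time-ordered queue of one-way messages.
        def __init__(self):
            self.now, self._queue, self._seq = 0, [], 0
        def send(self, delay, target, message, *args):
            self._seq += 1
            heapq.heappush(self._queue, (self.now + delay, self._seq, target, message, args))
        def run(self):
            while self._queue:
                self.now, _, target, message, args = heapq.heappop(self._queue)
                getattr(target, message)(*args)   # deliver the message; ignore any result

    class Label:
        def set_text(self, text):
            print("label now shows:", text)

    class Button:
        def __init__(self, sim, label):
            self.sim, self.label = sim, label
        def clicked(self):
            self.sim.send(1, self.label, "set_text", "clicked!")   # one-way message to another widget

    sim = Simulator()
    button = Button(sim, Label())
    sim.send(0, button, "clicked")
    sim.run()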

  • ninetyninenine 3 days ago

    > Neural nets are like that.

    No, neural nets are functional in nature. Each neuron is a combinator, which is the fundamental and most modular unit of abstract computation. The net is simply a composition of these combinators.

    Training the neuron though is imperative because data is discrete. You have to train one piece of data after another.
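
    (The composition view can be made literal with toy numbers; this is just an illustration, not a claim about any real framework:)

        import math

        def neuron(weights, bias):
            # a neuron as a pure function from inputs to an activation
            return lambda xs: math.tanh(sum(w * x for w, x in zip(weights, xs)) + bias)

        h1 = neuron([0.5, -0.2], 0.1)
        h2 = neuron([0.3, 0.8], -0.4)
        out = neuron([1.0, 1.0], 0.0)

        def net(xs):
            # the network is just a composition of the neuron functions
            return out([h1(xs), h2(xs)])

        print(net([1.0, 2.0]))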

    >GPU programming is kind of like that.

    Not true. See Futhark. What's going on here is that the high-level language is imperative. Why is the high-level language usually imperative? Because the implementation of computing is usually imperative. Assembly, whether it's for the CPU or the GPU, is imperative by implementation.

    But the high-level abstractions on top of these things don't necessarily need to be imperative.

    >Functional programming makes things local. Pure imperative programming makes things global.

    What do you mean by this?

  • TZubiri 3 days ago

    >Kay's ideas about "messaging" were never communicated well.

    Yeah, he just started one of the most popular programming styles that is still taught in universities (even if a different version than the one he envisioned)

    > He seemed to be arguing for a pure imperative style. You send a message to make something happen. This is the opposite extreme from pure functional programming. A style of programming where everything is an imperative statement with no return value might be possible.

    Agreed, OOP is orthogonally opposite of pure functional programming. Objects have state. Big revelation.

    >Functional programming makes things local. Pure imperative programming makes things global. This is hard on human programmers. It takes more short-term memory than most humans have to work in pure imperative mode.

    Imperative programming =/= OOP. Are you even aware of OOP and its relation to Alan Kay? Not sure if I should argue anything here, but in summary, OOP doesn't make things global; it precisely limits the knowledge and effect of objects. It's originally inspired by cells, which have cell walls and communicate via specific hormones with other cells. Have you ever used a language with private modifiers on variables or something?

    On simulation, it's worth noting that OOP was developed when the prevailing architecture was a fully local monolith; with the advent of the internet, the prevailing architecture became client-server and microservices. In this context OOP becomes the default, and objects no longer need to be simulated, but are natural objects in the world. A GUI is an object nowadays, some HTML running in a browser, and the server is another object, a server running in AWS.

    > Kay thought of it as a central organizing concept for computing. That's not where the field went.

    Absolutely the opposite. The only central organizing concept would be the scheduler/simulator, I guess? Since machines were usually single-processor machines, there was a central abstraction that allowed independent objects to exist. Processes, for example, are designed to be separate and independent; you can spread them across machines or run them on the same machine without much difference. You wouldn't argue that processes are a form of centralized computing? They are a feature designed precisely for the opposite: independence of compute and memory fractions. The fact that many independent processes/objects run on the same computer, and that it somehow needs to allocate compute resources between them, does not make it a centralized architecture.

    Are hypervisors and virtual machines a centralized computing architecture? That's just ridiculous. It's quite the opposite. There is a federated layer in VMs, in processes, and in object-oriented languages, yes, but it is in practice removable and splittable into separate physical layers due to the nature of its design.

    • Animats 3 days ago

      Objects, as implemented today, are mostly a scoping mechanism to package data with functions that work on it. But that wasn't entirely what Kay was proposing. He wanted objects to "send messages" to each other, as if they were nodes in a distributed system. Hence the "message" terminology. Discrete-event simulators really work that way, but not much else does. OOP is two-way - class functions return values.

      (I got this early view when I had a tour of PARC in 1975, and Kay explained his thinking.)

      • kragen 3 days ago

        to a significant extent you can do #doesNotUnderstand: in python, ruby, clos, or spidermonkey javascript†, which was the extent to which that kind of completely dynamic message sending was implemented in smalltalk-76. (smalltalk-72 was more radical.) you can think of the synchronous implementation of message sends as dynamically-dispatched subroutine calls (already more or less present in smalltalk-72) either as a helpful convenience or as a fatal compromise of kay's pure actors-like model

        it's true that not many systems really depart from that tradition and go fully asynchronous: only erlang, stackless python, orleans, golang, current ecmascript with promises or web workers, python with asyncio, backends connected together with kafka or rabbitmq or ømq, and a few others. and for hysterical raisins their asynchronous tasks aren't called 'objects'. but i don't think it's really true that that style of programming is entirely limited to discrete event simulators!

        ______

        † __getattribute__ or __getattr__, method_missing, no-applicable-method, and __noSuchMethod__ respectively
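
        for illustration, the python flavor of that hook looks roughly like this (a sketch, not any particular library):

            class Proxy:
                # doesNotUnderstand:-style hook: lookups that fail on the proxy
                # fall through to __getattr__ and can be handled dynamically
                def __init__(self, target):
                    self._target = target

                def __getattr__(self, name):
                    def handle(*args, **kwargs):
                        print(f"intercepted message {name!r}")
                        method = getattr(self._target, name, None)
                        return method(*args, **kwargs) if method else None
                    return handle

            p = Proxy([1, 2, 3])
            p.append(4)        # forwarded to the underlying list
            p.frobnicate()     # unknown message, handled dynamically
            print(p._target)   # [1, 2, 3, 4]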

        • TZubiri 2 days ago

          "not many systems"

          "only erlang, stackless python, orleans, golang, current ecmascript with promises or web workers, python with asyncio, backends connected together with kafka or rabbitmq or ømq, and a few others."

          That's a lot.

          I do agree with both that Objects as used in programming languages is a very limited definition and not quite what Kay had in mind.

          Kay Objects really exceed even all of the examples quoted above, which are plentiful.

          Take a bank, for example. Your system may communicate with a bank by using a Stripe API or by sending an ACH file to process some transactions. The bank may take the transaction and process it, only returning a response, in a somewhat functional request-response fashion. But they might also, of their own volition, send their own messages to the originator, for example a chargeback. They may even send messages unrelated to any specific message, like a request for documentation.

          From a technical standpoint, any API system that requires a callback address probably requires it because they need to send their own messages back; in that case there is a bidirectional channel of communication, and we are talking about Kay objects.

          A feature of this interpretation of Kay Objects is that they are not necessarily computer systems: a bank is a juristic entity, and its barriers of communication are human as well - they have NDAs and contracts, which are not unlike code. They protect their internal state and data, and have specific channels of communication.
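
          In code terms, the callback half might be sketched like this (hypothetical names, nothing specific to any real bank API):

              class Bank:
                  # Hypothetical counterparty: answers requests, but can also
                  # originate its own messages via a registered callback.
                  def __init__(self):
                      self._callbacks = []
                  def register_callback(self, handler):
                      self._callbacks.append(handler)
                  def submit_payment(self, amount):
                      return {"status": "accepted", "amount": amount}   # request/response half
                  def initiate_chargeback(self, amount):                # bank-originated message
                      for notify in self._callbacks:
                          notify({"event": "chargeback", "amount": amount})

              bank = Bank()
              bank.register_callback(lambda msg: print("received from bank:", msg))
              print(bank.submit_payment(100))
              bank.initiate_chargeback(40)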

          • Animats 2 days ago

            > I do agree with both that Objects as used in programming languages is a very limited definition and not quite what Kay had in mind.

            Yes. Kay was trying to envision a sort of object oriented nanoservices architecture, decades too early to build it. Arguably, CORBA came close to that. You create a local proxy object which has a relationship with a non-local object, and talk to the proxy to get the remote object to do things.

            Interestingly, there's a modern architecture for distributed multiplayer games which works that way - M2, from Improbable. In-game objects talk to other objects, some of which are on different machines. The overhead and network traffic within the server farm are very high, because there's so much communication going on. It's only cost-effective for special events. But it does work.

      • TZubiri 2 days ago

        Don't you think Kay Objects are very present in distributed microservices architectures? Services which provide APIs as the only way to interact. Some even require consumers to register their own servers for callback and require the implementation of callback functions.

        Without going much further, client-server architecture presents characteristics of Kay objects, if only because the physical separation requires limiting the control between server and client for security reasons.

        Multitenancy of machines also advanced Kay Objects in parallel, due to security concerns: first OS processes and then stricter virtual machines enforced the independence of these objects and allowed communication through strict long-range protocols like TCP in the case of VMs.

        I feel Kay pushed for objects at the application level, and this was largely redundant with operating-system-level concepts like scheduling and user/kernel memory protection. Threads and containers proved that there is a need for more tightly controlled scheduling and resource sharing, but in general Kay's objects nowadays just use strong encapsulation mechanisms at the OS layer, such that objects usually communicate via network protocols as if they were on separate machines altogether; they truly are separate physical objects running independently.

        It is important to consider the ideas of Kay in their time context, preemptive scheduling was a young concept, and processes back then did not have much protection against memory accesses. Of course the scarcity of resources (compute, memory) back then was also a factor to push for application level encapsulation, but nowadays we can just spin up virtual machines and throw metal into some datacenters, there is a surplus of hardware so there is no incentive to replicate and optimize hypervisors, so they don't move to the application layer at all. Turns out all of those security features are really important in guaranteeing encapsulation, you don't even have to worry about whether there is a bug leaking state, because that is taken as a security concern, and the barriers are designed to be protected against skilled attackers, so random bugs are much less likely to break encapsulation.

        Application-level objects are still very much used, to my knowledge in simulation software including games, where it would be unreasonable and unnecessary to spin up a VM for each butterfly in a simulated world. But it turns out that in business, Kay Objects are usually assigned to a programmer or to a team of programmers, so there are rarely situations where a programmer is in charge of more than one object and needs to play a dissociated god controlling and designing many entities; and when we do, we inevitably suffer from an identity crisis. And we use harder abstractions like processes or servers anyways. There is no need to fit multiple Kay objects into a single process; that usually causes way too many objects. It's desirable to assign some cost and administrative overhead to object creation to avoid Programmer Ego Death.

    • Twey 2 days ago

      > Agreed, OOP is orthogonally opposite of pure functional programming. Objects have state. Big revelation.

      I don't think that's quite true. Having state or not isn't a dichotomy but a continuum about the size of the scope of the state. Objects (in either the Kay sense or the Java sense) exist to encapsulate state, to limit its scope and make it easier to reason about. That puts OOP (state is local to objects, and can be cleanly reset by destroying and recreating the object) somewhere between ‘pure imperative’ (only global state; there is no reliable way to reset the state) and ‘pure functional’ (state is limited to being kept in function arguments and return values, and is reset on each function call) on the continuum.
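
      A compact way to see the continuum (illustrative only):

          # 'pure imperative': global state, no clean way to reset it
          balance = 0
          def deposit_global(amount):
              global balance
              balance += amount

          # OOP: state scoped to an object; reset it by making a new object
          class Account:
              def __init__(self):
                  self.balance = 0
              def deposit(self, amount):
                  self.balance += amount

          # pure functional: state lives only in arguments and return values
          def deposit_pure(balance, amount):
              return balance + amount

          deposit_global(10)
          acct = Account(); acct.deposit(10)
          print(balance, acct.balance, deposit_pure(0, 10))   # 10 10 10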

smallstepforman 3 days ago

Clear as mud. No matter how good Alan Kay is, he failed to properly describe messaging, as used in an Actor environment. He missed the Actor Programming model. Also the late Carl Hewitt failed to properly explain and implement a working Actor model. A shame, since there are many working Actor implementations in many languages.

  • emmanueloga_ 3 days ago

    Kay’s ideas are definitely interesting, but they can feel pretty vague. For example, what are these "fences" or "metaboundaries" he keeps mentioning? They probably aren’t anything like type checking since he seems to love dynamic typing and late binding. Did either Smalltalk or Squeak implement any of these "metaboundaries" at any point after this 1998 message?

    When it comes to "messaging," it usually just boils down to method dispatch or large switch statements. It doesn’t seem like some magical concept we haven't figured out yet; it’s more like something we already know. When I see Kay complaining about messaging, I imagine him also complaining about other things: "WE NEED BETTER WHEELS", or "WE NEED BETTER ELECTRICITY" (?). What do you actually want, Alan? :-p

    From my experience with large Ruby codebases and publish/subscribe systems, debugging can become quite messy when there’s too much flexibility. I think this is what Kay is getting at, even if he maintains the idea that a dynamic system like Smalltalk will somehow evolve to fix these issues.

    • mpweiher 3 days ago

      > Kay’s ideas are definitely interesting, but they can feel pretty vague

      They seem vague because they are research questions. Tough research questions.

      > [messaging] usually just boils down to method dispatch or large switch statements.

      And that's the problem.

      > [not something] we haven't figured out yet;

      Well, we obviously haven't figured it out yet, because it ain't large switch statements or (just) method dispatch.

      > debugging can become quite messy when there’s too much flexibility.

      Exactly what he's talking about! Languages like Ruby have the metaprogramming flexibility, but they are lacking in the security of meaning department.

      Languages like Go are pretty good in the security of meaning department, but lacking in the flexibility/expressiveness department.

      So far, we have achieved either/or. He is saying what we need is both.

      It's a tough problem.

    • cxr 3 days ago

      > Kay’s ideas are definitely interesting, but they can feel pretty vague.

      I agree on the whole, but I think he followed through this time. He gave a pretty cogent set of examples that doesn't leave the whole thing coming across as incoherent (like a mystic we're supposed to revere and take their words as some form of high wisdom that would make sense if only we could attain the requisite form of enlightenment). Viz:

      > I would say that a system that allowed other metathings to be done in the ordinary course of programming (like changing what inheritance means, or what is an instance) is a bad design.

      (There are two things being communicated here—what sorts of things he means when talking about transgressing the metaboundaries, and a position about whether it's a good idea to do it willy nilly—with his position on the latter being: No. The former seems clear enough and his take on the latter is definitely reasonable and might even qualify as "wise".)
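
      (Python makes the transgression easy to picture: a few lines of ordinary code can change what "is an instance" means, which is exactly the kind of metathing he calls a bad default. A toy sketch:)

          class Fuzzy(type):
              def __instancecheck__(cls, obj):
                  # ordinary code redefining what "is an instance" means
                  return True

          class Anything(metaclass=Fuzzy):
              pass

          print(isinstance(42, Anything), isinstance("hi", Anything))   # True True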

    • TZubiri 3 days ago

      It's important to note here that he is talking not about OOP concepts at a base level, but rather he is talking about designing OOP programming languages.

      So we are seeing a discussion about how to program a programming language.

      "When it comes to "messaging," it usually just boils down to method dispatch or large switch statements. It doesn’t seem like some magical concept we haven't figured out yet; it’s more like something we already know"

      This feels like the Seinfeld effect: it sounds obvious in hindsight, yes, but it's precisely because he was the pioneer; things like Java, microservices, JSON, and APIs have evolved from Kay's ideas.

      • taffer 3 days ago

        > it's precisely because he was the pioneer; things like Java, microservices, JSON, and APIs have evolved from Kay's ideas.

        Not to mention Excel, which uses cells, a concept invented by Alan Kay. He also invented OOP which in 1964 inspired the creation of Simula, the first OOP-Language.

        • Rochus 2 days ago

          > He also invented OOP which in 1964 inspired the creation of Simula

          Not in this universe ;-)

          See e.g. https://ethw.org/Milestones:Object-Oriented_Programming,_196...

          • TZubiri 2 days ago

            Yeah, and Columbus didn't discover America, Windows did not invent window interfaces, Notch did not invent Minecraft, and Mullenweg didn't invent WordPress.

            These are still THE most popular contributors to the subject by far, especially by measure of popularity.

            • Rochus 2 days ago

              In any case, he is a good storyteller.

    • Twey 2 days ago

      I interpret a ‘fence’ here to mean a hoop the programmer explicitly has to jump through — something like Rust's `unsafe`. It doesn't need to be difficult to do but it should be difficult to do by accident :)

  • fidotron 3 days ago

    If you want his view on Actors then the conversation with Joe Armstrong is enlightening.

    The main common ground they share is that CSP becomes too synchronized and particular, making it too difficult to use for systems in the large.

  • nabla9 3 days ago

    He is not describing the Actor programming model. He describes his own model.

    Message passing in Smalltalk predates Hewitt's Actor Model and was used as inspiration. The messaging in Smalltalk is a little different and IMHO better in many cases.

    • mpweiher 3 days ago

      But he's not describing Smalltalk. He is describing what he wanted Smalltalk to evolve into. Which he didn't know how to do, because otherwise we would have it by now.

      This quip by him from OOPSLA '97 is well-known:

      I made up the term object oriented. And I can tell you I did not have C++ in mind..

      A little less well-known are the words that immediately follow:

      So, the important thing here is: I have many of the same feelings about Smalltalk

      https://youtu.be/oKg1hTOQXoY?t=634

      • Phiwise_ 3 days ago

        Why would you fault the usual quoting of Kay for cutting off the full context that he also has criticisms of Smalltalk, and then cut yourself off before he specifies that what he's not committed to is the syntax and library system, while the message-passing execution model is the important thing he's trying to promote? That just muddies the waters more. This email was sent a year after OOPSLA '97, so clearly he can't have been talking about messaging as Smalltalk's problem.

        As for where he wants Smalltalk to go, that's what Squeak was for. He talked about it on plenty of occasions, at least one of which was also before OOPSLA, and actually did get a research team together to develop it out in the late 2000s: https://tinlizzie.org/IA/index.php/Papers_from_Viewpoints_Re...

        • Rochus 2 days ago

          The original Smalltalk of 1972, the language Kay designed, indeed had some kind of message passing (even though it was synchronous, the receiving object interpreted messages composed of tokens). Smalltalk-76, essentially designed by Ingalls, who was also the sole author of the 1978 publication, made a fundamental shift towards compiled virtual methods, essentially as it was done in Simula 67 and adopted by C++ (though much less efficient). So yes, it makes pretty much sense when Kay claims that he didn't have C++ nor Smalltalk in mind when talking about OO. See also https://dl.acm.org/doi/abs/10.1145/3386335.

  • mpweiher 3 days ago

    I thought the final paragraph was very clear:

    I would suggest that more progress could be made if the smart and talented Squeak list would think more about what the next step in metaprogramming should be - how can we get great power, parsimony, AND security of meaning?

    Did you mean that he should have described actors, but did not?

    To me at least, "ma" goes beyond just the actor model.

  • pakl 3 days ago

    Actors solve a very different problem. Alan Kay was talking about enabling computing across heterogeneous systems.

    • jayd16 3 days ago

      What about actors makes that impossible?

  • layer8 3 days ago

    Most of Alan Kay’s writings on that topic can be reduced to something along the lines of “I want things to be nice and problem-free. I have a vague feeling that there is a methodology called ‘OOP’ and ‘messaging’ that would achieve that. All systems that claim to be OOP that are not nice and problem-free are obviously missing the point.”

    • fidotron 3 days ago

      The problem with this is he led teams that built systems that proved his point.

      • layer8 3 days ago

        They built systems, but I disagree that those proved his point. It’s not even clear what precisely his point is and how you would evaluate success or failure for it.

        Regarding Smalltalk, there are conceptual reasons why it failed, some of which are mentioned in this thread: https://news.ycombinator.com/item?id=10071681

        • kragen 3 days ago

          it failed? today's most popular programming languages are about half smalltalk derivatives (python, js, java, c#, and vb.net, but not c++, c, golang, sql, and fortran), apple is the world's most important computer manufacturer, wimp guis still dominate on computers that have keyboards, every web browser includes an ide with an object inspector, and virtually all programming is done in ides

          that doesn't sound like failure to me

          • bitwize 3 days ago

            The royalties on the laser printer alone earned Xerox back PARC's entire expenses 200 times over but PaRc FaIlEd BeCaUsE xErOx DiDn'T kNoW hOw To mOnEtIzE tHeIr InVeNtIoNs.

        • igouy 3 days ago

          > some of which are mentioned in this thread

          Tell us which three you regard as most important; and which of the 61 comments in that thread demonstrate most clearly that they are important failings.

      • igouy 3 days ago

        For example ?

        (And which point was proved.)

pakl 3 days ago

At Alan Kay’s Viewpoints Research Institute, the problem was phrased in a more concrete form and a solution was provided — “Call by Meaning”[0].

The most succinct way I have found to state the problem is: “For example, getting the length of a string object varies significantly from one language to another... size(), count, strlen(), len(), .length, .length(), etc. How can one communicate with a computer -- or how can two computers communicate with each other -- at scale, without a common language?” [1]

The call-by-meaning solution is to refer to functions (processes, etc) not by their name, but by what they do. VPRI provided an example implementation in JavaScript[0]. I re-implemented this -- a bit more cleanly, IMHO -- in Objective C[1].

[0] http://www.vpri.org/pdf/tr2014003_callbymeaning.pdf

[1] https://github.com/plaurent/call-by-meaning-objc?tab=readme-...
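
For a rough flavor of the idea (my own toy sketch, not the VPRI implementation): instead of a name, the caller supplies a description, and the system finds a function whose advertised meaning matches.

    # Toy registry keyed by semantic descriptions rather than names.
    # The descriptions and matching here are invented for illustration;
    # the VPRI paper uses a much richer knowledge base.
    registry = [
        ({"measures": "time"},                  lambda: "12:00"),
        ({"returns": "length", "of": "string"}, lambda s: len(s)),
    ]

    def find(meaning):
        for description, fn in registry:
            if all(description.get(k) == v for k, v in meaning.items()):
                return fn
        raise LookupError(f"nothing matches {meaning}")

    clock = find({"measures": "time"})
    strlen = find({"returns": "length", "of": "string"})
    print(clock(), strlen("hello"))   # 12:00 5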

  • toast0 3 days ago

    > The call-by-meaning solution is to refer to functions (processes, etc) not by their name, but by what they do.

    This seems like call by an even longer, more difficult to use name.

    And it would seem to rely on a common language to describe functions/methods, which clearly we don't have or everyone would use the same names for things that do the same thing already.

    • pakl 3 days ago

      Think about it. A “meaning” in this usage is definitely not a longer name.

      • toast0 3 days ago

        From the doc you linked we have

           var clock = K . find (
           "(and
            (variableRepresents that thing)
            (typeBehaviorCapable-DeviceUsed thing
            (MeasuringFn Time-Quantity)))")
        
        So if I want a clock instead of using the name system.timer, now I need to know the much longer name. Maaaybe you think I can reason about the parts of this name, but it's just a longer string with funny syntax. And it's only really useful if we all agree on the language of description, which if we had a common language of description, we wouldn't have the problem this is trying to address.

        If you've got an example of a real system using this where it's actually better than searching docs, or learning what the language of today uses to measure the size in bytes and the size in codepoints and the size in glyphs, please link to that. But this feels like yet another thing where if everyone agrees about the ontology, everything would be easier, but there's no way everyone would agree, and there's not even an example ontology.

        • TZubiri 3 days ago

          The difference between a descriptor and a name is that there is one name, but infinite descriptors.

  • pilgrim0 3 days ago

    I find this super interesting! The first thing that comes to mind reading the demo code is, perhaps against the purpose, to canonicalize the lookup examples, which in turn suggests that the examples could be expressed by type expressions alone. Which makes me think of a type system that embeds a generalized set of algebraic operations, so that the adder function is one that simply returns the type Number + Number. Those could be semantic operations, beyond the basic mathematical ones, of course. Anyway, just thinking out loud.

  • gandalfgeek 3 days ago

    Thanks for the pointer!

    "Call by meaning" sounds exactly like LLMs with tool-calling. The LLM is the component that has "common-sense understanding" of which tool to invoke when, based purely on natural language understanding of each tool's description and signature.

bazoom42 3 days ago

He should just have called it microservices instead of objects.

  • thom 3 days ago

    Microservices today have all the same problems as OOP, but vastly amplified. My kingdom for some more functional approach to architecture, with services as more or less pure functional transforms, and some sort of extremely well-typed data mesh underneath.

    • Phiwise_ 3 days ago

      Smalltalk is a partially-functional language (first-class functions in 1976, inspired by lisp) and also got static typing extensions many years ago.

    • TZubiri 3 days ago

      The world runs on microservices, government, agencies, companies, departments, bodies, organs, cells.

      Sure, reality has problems, and no it won't be solved by trying to understand everything as functions. Welcome to the world

  • agumonkey 3 days ago

    The more I see how microservices evolve the more I think about J2EE remote objects / ejb. A little personal facepalm moment.

virtualbluesky 2 days ago

Another way to look at it is by analogy. You pick up a cup, the cup warms your hand uncomfortably, so you put it down.

You and the cup are objects, and physically send messages as you interact. That leads to changes in the physical world as each actor decides what to do with the incoming information, by physics or by conscious action.

So far so good. Except software is just information, and so the software version of that interaction includes the "person put hot cup down on table" event. That interests somebody, so they rapidly express their displeasure and rush to put a coaster underneath...

And that is a valid model of computing: direct messaging between interacting objects, a stream of events of the produced changes, and actors that consume that stream and optionally choose to initiate a new interaction.
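
A rough sketch of that shape (invented names, Python standing in for whatever runtime you like):

    events = []   # the stream of produced changes

    class Person:
        def __init__(self, name):
            self.name = name
        def receive(self, message, source):
            if message == "too_hot":
                events.append((self.name, "put cup down"))   # change recorded as an event

    class Cup:
        def __init__(self, temperature):
            self.temperature = temperature
        def picked_up_by(self, person):
            if self.temperature > 60:
                person.receive("too_hot", self)   # direct message between interacting objects

    class CoasterFan:
        # actor consuming the event stream, optionally starting a new interaction
        def react(self):
            for who, what in events:
                if what == "put cup down":
                    print(f"{who} put a cup down; rushing over with a coaster")

    Cup(80).picked_up_by(Person("you"))
    CoasterFan().react()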

raintrees 3 days ago

"The messaging IS the program"

(Apologies to Marshall McLuhan)

  • bitwize 3 days ago

    Kay cites McLuhan A LOT in his talks.

abdellah123 3 days ago

OOP is about modeling... Messaging is optional. See the BETA programming language.