Man, I feel like APL has unlocked some latent part of my brain.
I'm a few years into seriously using APL and now work in it professionally doing greenfield development work.
Starting out, solving puzzles and stuff was fun, but when I tried to write real programs, I hit a huge wall. It took concerted effort, but learning to think with data-first design patterns and laser-focusing on human needs broke through that barrier for me.
Writing APL that feels good and is maintainable ends up violating all kinds of cached wisdom amongst developers, so it's really hard to communicate just how brutally simple things can be and how freeing that is.
Interesting, how did you choose APL?
I worked in APL2 full-time years ago, on big asset-backed bond models: big as in some of the largest workspaces the IBM support people had ever seen. It never occurred to me to pick it up again, but I have been looking for the Polivka/Pakin book I learned from (the edition prior to their APL2 edition).
Could you give some examples of where you're using it?
I'm thinking I'd like to learn array languages (APL, J) and maybe use them professionally. Maybe their time has come.
Probably, especially given the boom of GPU/Tensor computing.
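To illustrate the connection: the whole-array style these languages encourage is the same style GPU/tensor libraries reward. A rough sketch below, using NumPy as a stand-in for APL primitives (the APL expressions in the comments are informal glosses, not part of the thread):

```python
import numpy as np

prices = np.array([10.0, 12.5, 9.0, 11.0])
qty = np.array([3, 1, 4, 2])

# Sum-reduction over an elementwise product, roughly APL's  +/prices×qty
# — one expression, no explicit loop, and trivially vectorizable.
total = np.sum(prices * qty)

# Boolean compress/filter, roughly APL's  (prices>10)/prices
expensive = prices[prices > 10.0]

print(total)      # 100.5
print(expensive)  # [12.5 11. ]
```

The point isn't the specific primitives but the habit: expressing computation as transformations of whole arrays, which is exactly what maps well onto GPUs.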
You might find Stefan Kruger's book useful: https://xpqz.github.io/learnapl/intro.html or his write-up of the APL Cultivations (https://xpqz.github.io/cultivations/Intro.html)
Not sure where best to start with J, although I've found it interesting to read through the Dictionary (https://www.jsoftware.com/help/dictionary/contents.htm) and see how it compares to APL.
https://www.semanticscholar.org/paper/System-Design-of-a-Cel...
And regarding Sci-Hub: it's really unfortunate that IEEE hasn't followed the ACM in removing its paywall for ancient articles. Especially since, ostensibly, IEEE isn't a for-profit entity, and these old articles have zero monetary value.
Missing the tag (1970), and the paper text.
It's one of those broken sites where you can't even access the text. And I am signed in; it just doesn't load the PDF.
How does this compare to a modern GPU?
Reading the abstract, it seems like a precursor of some kind.
Can't access the text, but it "sounds" very advanced for 1970. Gemini 2.5 didn't give me much about it, so I'm a little perplexed about its relevance.
You can't imagine something being relevant because the AI doesn't know about it? That seems more like a fault of the AI, if you ask me. There is a huge amount of information that hasn't been, or cannot be, captured in the data LLMs are trained on.