Posts
DONE 2am paella
Introduction
It's 2am and you're hungry, but you look in the noodle cupboard and realise you're out of instant ramen 😱.
Here is a quick way to eat some vegetables, prep some soul-filling food and have nummies done in 20ish minutes (which is not bad for something from scratch). This is not a traditional recipe; it's kinda similar to one, but it's not authentic in any way, shape or form.
The things you will use
Ingredients
- rice, about a cup
- this is the base and pretty non-negotiable; pasta may work in a pinch, but it will throw off some ratios and be harder to manage.
- vegetables
- this can be anything, except maybe lettuce. Here is what I used:
- mushrooms of some kind.
- an onion, I use spring onions as they also work as a garnish.
- bell peppers
- and sweet potatoes, but again anything will work.
- spices
- I used turmeric, chilli flakes, garlic and garam masala as that's what I had on hand, but if you have some harissa powder it would work really well here. Play with the flavours. It's a blank canvas.
- protein
- now this is kinda optional, as the things mentioned before work wonders on their own, but if you have chicken pieces or something similar, tossing them in will not hurt. That being said, if you do, I would try and keep the skin (if there is any) above the water line to help with browning later.
utensils
- chopping board and knife
- serving dish :: i.e. a plate
- oven safe pan :: this is because we will stick it under the broiler but if you don't have one then this dish will still be good without broiling the top.
the basic method
- preheat the broiler
- add some oil to a pan and start to fry your protein. This gives it a head start and prevents some nasty smells (boiling unbrowned meat smells like corpses).
- start prepping your veg. You can do this beforehand, but doing it now saves a little time. Cut them into manageable pieces; they don't need to be clean or even, as it will all be boiled till tender.
- add the veg into the pan and fry it all together.
- add in the rice to toast, as well as the spices from before; fry till they are fragrant.
- add water to the pan and boil till the rice is soft, topping the water up as needed.
- while it's boiling you can basically wash up the chopping board.
- once the rice is cooked, turn off the heat and put the pan under the hot broiler. Cook until the top is brown (this will happen quickly, so watch it closely).
- eat. Optionally garnish with something green: the green bits of spring onions, coriander leaves, go nuts!
Finishing thoughts
This is a fun little dish that acts as a blank canvas. Chuck in whatever you have, let it simmer for a bit, and you're done!
TODO The privacy of public transport
We as a world need to pick up public transport. Not only is it usually leagues better for the environment compared to cars, it is also more space-efficient and leads to quieter, safer and nicer cities.
But this comes at the cost of a few things (some of which can be fixed with funding and time), and one of them seems almost irrefutable: the loss of privacy on transport. Now, while in a theoretical sense this is true, you are essentially in a crowd of people, any of whom could approach you, in practice it's one of the more private experiences I have had.
The Car
The car is meant to be the epitome of freedom and privacy: go wherever you want with no need to interact with people. But there is a lack of something here. Either you are the driver, in which case you are not free to do things like work, read or sleep, and in a sense the time spent driving is wasted. Or you are a passenger, in which case you are also not free (unless you are being chauffeured around, which is out of the reach of 90% of the population). You are bound to the social graces that come with human interaction in a confined space, or you are in an even more mundane situation as the navigator or some other auxiliary function within the journey.
In either sense you're not free to use that time in some kind of productive manner, and this only gets worse with more people in the confined space. I say this as someone who has had one too many packed road trips and dead butt cheeks.
Public Transport
With public transport I don't have this worry. When I am travelling alone (which is most of the time), I am free to do what I want. I usually read, but I am free to do some light work, and free to rest to some degree as well. In reality I do not have to talk to anyone other than the bus driver or the odd ticket collector. Other people are not a problem, as they don't want to talk to me any more than I want to talk to them. This is much more private and much more free, and with the ability to buy tickets on my phone, the amount of interaction with other human life goes down to near zero. The interaction that does happen is quick or pleasant enough that I enjoy it.
This being said, this is not blanket privacy: you can't break down on a train the same way you can in a car, nor could you use a train as a home when camping or when times are tough. It's not a private compartment, but it is a nice social interaction bubble, which in a lot of contexts is all people need.
Conclusion
Public transport is similar to a cafe. When you sit down at a table, even though you are in a crowd, you are alone, and unless you are being loud or (for lack of a better term) weird, you are left alone for the most part. This privacy in public is, in a way, freeing, and something that becomes nice to have after a while.
TODO Pacific Rim, Mecha and Charm
Pacific Rim is one of my favorite movies, not because it says something deep about society or the human soul, and not because it's even that well made, but because it's a direct, witty, campy movie that opens up my inner 9-year-old in more ways than one. It's a deeply fun and interesting movie that almost forces you back to being that child who loves the carnal pleasure of cities being destroyed by the biggest, baddest monsters fighting the biggest, baddest robots.
The babble
and boy does this movie have a lot. It's a massive word soup that is thrown at you hot and fast.
Solid iron hull, no alloys. Forty engine blocks per muscle strand. Hyper-torque driver for every limb and a new fluid synapse system.
This almost forces you to just let your mind run wild; there is nothing to compare it against, so we are forced to run with the descriptors. We start going to bigger numbers and cooler words, and in essence we dumb down our thought. This does not stop, mind you. We are bombarded with it throughout the movie. It is never meant to be focused on; this is not a technical movie that wants to build a technical world. but its use makes us
The Mechs
Or Jaegers as they are known. meaning Hunter in German they are less
The Characters
The Charm
DONE Dr Strange: Movie of madness
I recently, like a lot of people, watched the new Dr Strange movie. I found it… underwhelming to say the least. Even though it looked great I left that movie hall feeling like I just watched an underused mess. It was almost not as fun as having to deal with the ire of my family, as I was the one who chose the movie…
BTW this is an adaptation from a long discord message I posted in the doom emacs discord server so… Hi Lejon! I guess.
Spoilers ahead, you have been warned
The movie
In a word, it felt like an underused, cobbled-together mess. I watched it in 2D, which meant I did not get the face-punching 3D effects, and most of the weight of the movie rested squarely on the story (though it still looked great). I think with better writing not only would the themes and concepts have felt more solid, but we would have seen more use (and proper use) of its characters.
The villain introduction relied on you having watched WandaVision for the arc to make sense; if you are a casual viewer coming straight from Endgame, it felt very much out of left field.
Characters that were teased in the first Dr Strange, principally Mordo, who was set up as the villain for the sequel, never materialise. We instead get a Mordo that is not the one that has been built up, who is then thrown away after a mild fist fight with little resolution to what he means to Strange. This lack of continuity from its principal prequel felt like a pressure release valve going off: all bets are off and all tension is cut. It will not translate over to any third movie either, as it would have been too long a wait; the first movie was released in 2016… 6 years ago. Another movie will not be coming for a while, and by that time my psyche at least will have moved on. Mordo and his build-up will have been wasted.
The Illuminati (why the Illuminati??) present in universe 838 comprised the first canon introductions of both the Fantastic Four and the X-Men in the MCU, both of which were over very quickly on screen. It felt like "oh, this is a thing that will be coming in later movies, but we want to tease it now". Then you also have Peggy Carter as Captain Carter (or whatever), principally a call back to What If, but if you did not watch that, it was just a gag. A lesser example of the problem I had with Wanda's arc.
Themes from a box
Themes are presented, such as happiness, motherhood, loss and confidence, but none of them felt explored in any satisfactory way. Happiness was not really dealt with: Strange was just kinda asked "are you happy?" and lied through his teeth, which came up again near the end when evil Strange number 3 asked the same question and then they started fighting with music notes. It was kinda resolved with the monologue to 838 Christine, but in a cheesy way, not in anything that felt good for the character. It felt like a cliche line serving a cliche theme that did not do much for our character. I feel like the movie would not have been different if it had not been included.
Motherhood was handled better in this regard. It gave Wanda's sacrifice weight and was played out in little bits throughout the movie, as she dreamed and dream-walked. It added to her ending, seeing her actions come to a head and seeing how she will never be a mother to any child she abducts. It was a good scene and a good theme.
Confidence, on the other hand, while being intertwined with happiness, also takes a lot of traits from it (not really, but the parallels are there). It did not feel dealt with in any real sense throughout the movie, and it just came to a head in that power-of-friendship ending where the protagonist learns to believe in themselves and then girlbosses the Scarlet Witch. It felt rushed with no real build-up.
Multiverse of… not much?
The concept of the multiverse was very much underused, with us not getting a chance to really see it. This does not mean I want to see a massive number of universes, but I do want to see the concept explored. Otherwise it does not become a distinct thing, a concept the movie plays with, but a plot device that does nothing except give our characters a reason to move (editor's note: a MacGuffin (Thanks Lejon)), as well as fuel this power-of-love story ending (I know it's not actually a power-of-love ending, I am taking a little bit of piss). Some may say it was never explained in any real way to reflect its unknown nature to our protagonists, but even so it does not feel distinct in any meaningful way.
What this says about the MCU
This speaks to a bigger problem I saw in Endgame but think this movie exemplifies: the MCU has gotten too big to be cohesive. Most of the movie felt like callbacks, teasers and set-up, with actual substance being lost. It leads to a movie that felt hollow in many senses; gravity has been lost and points of interest have become little more than lore points for the overall arc in phase 5. As it grows, if you want to stay in the loop and understand most movies fully, you need to watch everything that comes before; it's getting to the point where there are entire sub-markets writing up plot summaries so that you can understand the movie. This essentially excludes casual viewers from the franchises they enjoy. People who only tune in for the movies they care about (for me, those being Spider-Man, Dr Strange and maybe Thor) are left out. Now, I could go on about how this is all to drive up profit and coerce people into going to every movie and watching all the shows on Disney+, but that's for another day.
Part of the magic of the first set of movies was that it was a small rag-tag team, each member with their own introduction; each movie added context but never became required reading to understand in full.
Endgame was the beginning of this. This movie is the beginning of the end.
DONE Moving to Wayland! Login shell lambasting
The problem
I have been trying to move to Wayland for the past year. The call of gestures,
less artifacting and just the hype had me spellbound. The problem was,
GNOME, my DE of choice, made what I think is the asinine choice of not
starting the DE in a login shell. This meant my .profile never ran and my nix
environment never got set up. That is a deal breaker for me because I have
programs I use every day (principally emacs) which I could no longer access.
This is not a problem though! GNOME has thought of everything! You can now
declaratively set all the environment variables you want with an
environment.d/*.conf file!.. Oh wait. I can't run shell scripts with that…
That's
the reason I could not use my nix programs: nix sets its environment using a set of external shell
scripts that can and do change as nix installs and removes packages. This is not
a problem for a login shell, which just runs them like any normal sourced file.
But you can't run scripts from this conf file, meaning nix stays unusable.
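For reference, an environment.d file is just a flat list of KEY=value assignments. Here is a made-up sketch (the file name and values are illustrative, not from my actual setup):

```shell
# ~/.config/environment.d/90-example.conf
# Only static KEY=value lines are allowed -- no shell syntax and
# no command substitution, which is exactly why nix's sourced
# shell scripts can't run here.
EDITOR=emacs
XDG_DATA_DIRS=/home/jeet/.nix-profile/share:/usr/share
```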
What was my solution then?
Well, my first port of call was of course to force GNOME to start a Wayland
session in a login shell. After all, that's how other people get other Wayland
environments to respect their .profiles. Ez, slap a -l in the exec call of
whatever program starts GNOME and we are golden, right… Well, no. While you can
wiggle GNOME into running in a login shell, it seems it's allergic to running a
Wayland session that way. I am not sure what black magic GNOME does to start its
Wayland session, but it's above my pay grade.
That being said, I tried most things, from fiddling with the xsession file to
pass in a -l argument, to making my own slightly modified gnome-session
start-up script. They either did not spawn a Wayland session, or did not load my
.profile (or in one entertaining case did not launch GNOME at all, I just had
a bare X display server). In any sense it did not work and it made me sad.
The actual solution
But thanks to Flat on the doom emacs discord server for breaking me out of the rut I was in, and inspiration from the doom env command: instead of trying to force GNOME into the login shell, bring my login shell (more specifically my environment) to GNOME!
This is where I ask you to flash back to 20 seconds ago,
where I mentioned the environment.d/*.conf files. All we are doing is setting
environment variables with our .profile; if we could capture all of the
environment variables my .profile sets and pipe them into a conf file, we would
be done! In a nice list, it would take three things:
- an empty environment, to see exactly what is being set
- a command to run my .profile
- a command to print all the set environment variables
The first and last are actually handled by the env command! Just call it with
the -i flag and it starts with an empty environment! Then call it at the end
to get my list! Now to read my .profile.
Turns out we can just call sh with the -l flag to start a login shell, like I
have been wanting to do with GNOME! This leads to a very nice one-liner, which
I can then redirect into a .conf file like so.
env -i HOME=/home/jeet sh -l -c env > ~/.config/environment.d/profile.conf
I don't even have to do any parsing as it's already in the syntax the
environment.d expects!
And that was it! Just that one-liner and a log out, and I can finally use Wayland!
It's such a simple hack in retrospect. All I need to do now is hook this
into running at the tail end of a nix update to recapture my environment, and
this hack would be seamless!
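As a sketch, that hook could be as small as a pair of shell functions (the function names are my own invention, and nix-env -u stands in for however you actually update your packages):

```shell
#!/bin/sh
# Recapture the login-shell environment into a file that GNOME's
# environment.d machinery will read on the next login.
recapture_env() {
  env -i HOME="$HOME" sh -l -c env > "$HOME/.config/environment.d/profile.conf"
}

# Hypothetical wrapper: update nix, then refresh the captured environment.
nix_up() {
  nix-env -u && recapture_env
}
```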
Conclusion
The fact I have had to do this in the first place feels silly. I love GNOME and I can understand why the devs would want to move to a more integrated system in a sense. That does not stop me from being mad that I had to wait a year to be able to use Wayland full time, or that I have had to spend so much time trying to figure out how to wiggle my not-unpopular use case into something usable. In any case, the fix is there, even if it's not preferred, and I can move on to bigger things! This may be the beginning of a set of posts about Wayland and my adaptations to it, so stay tuned!
And if you did manage to actually get a GNOME Wayland session to start in a login shell, please do reach out!
TODO Doom, Emacs and Communication
Recently Protesilaos, also known as Prot, wrote a blog post detailing how Doom's configuration of Git Gutter constituted a soft fork insofar as it broke his modus themes. I will not detail what happened here as Prot does a fine job of that (and as it was addressed upstream). The problem with what happened here is that nowhere in this process was the doom project informed of the problem. Instead of notifying the doom project, it was diagnosed and documented in the manual. The doom project only came to know when said blog post was published, and steps were taken in a timely manner, with more being worked on to address the problems stated.
This then happened again with the release of modus themes 2.6.0, where the theme dropped support for solaire mode on the grounds that doom users opt into using the package without knowing, thus leading to the themes being sub-par out of the box (again, I recommend reading the release log linked above). This is not to suggest that themes need to support solaire mode (and solaire works in such a way as to deactivate when a theme does not support it), but again neither the doom project nor the maintainer of solaire (in this case the same person) was notified, and again found out through this change log.
In a word, this is not a good way to act. The doom project cannot stay on top of how every package in the emacs ecosystem will interact with doom, and to ask that of it is silly.
I can empathise with package maintainers getting issues they can't diagnose because the problem is not with their package but with how that package interacts with doom. But the solution is not to silently move on (only for the problem to resurface later) but to talk to the project. If a problem comes up, make an issue on the bug tracker or discourse, or shoot us a message on the discord. From there we can work towards a solution that both parties can accept. In the former example it was a simple matter of gating the config; the latter could have been solved on either the solaire side or the doom side (in this case it's the same maintainer). The solutions are there to be had, if only the community talked with us about these issues.
Doom's relationship to the wider community
It makes sense to discuss how doom relates to the rest of the community, as it's special in this regard. In a phrase, doom is a middleman, taking packages and configuring them for end users. This means we need to have relationships with both sides of this equation, and to some degree we do. We have package maintainers who discuss problems with us as we develop modules using their packages. We also have doom users who maintain packages that then get put back into doom!
Who the forums are for
In a word: everyone. This is an area of active improvement for us as we introduce new constructs to make sure that maintainers can raise their concerns with us in a constructive manner. But this should not stop maintainers talking to us: if your package is interacting badly with doom, raise it on the discourse or github. If you want to discuss something in depth, join us on the discord (eventually there will also be a matrix room if that's more your style). The key here being: the forums are for everyone, not just users.
DONE I finally understand monads and now I will write about it
CLOSED: [2022-11-23 Wed 05:53]
After a lot of struggle I finally understand monads and why they are useful. This is less an explainer and more of a write up of my understanding. In any case let us get started.
So what is a monad?
A monad is a datatype that implements >>=. You can call it bind or then, with
the latter name hinting at what it does.
Here is its type.
(>>=) :: m a -> (a -> m b) -> m b
This function takes in a context of m a, then a function which transforms that
inner value, returning the transformed value in the same context.
print $ Just 1 >>= return . (+1)
print $ Just 2 >>= return . (+1)
Just 2
Just 3
This allows for many operations to be chained together, as the return value of the first becomes the input of the next.
print $ Just 1 >>= return . (+1) >>= return . (+1)
Just 3
Do notation
This chaining of operations looks a lot like imperative programming. This is in
part why do notation exists. If we were to use IO (a value contained in a
context that says it came from an input/output system), this
print "Hello, what is your name?" >>= \_ -> getLine >>= \name -> print $ "Hello " ++ name
turns into
main = do
print "Hello, what is your name?"
name <- getLine
print ("Hello " ++ name)
Which should look pretty familiar to you. Here is what the python looks like
def main():
print("Hello, what is your name?")
name = input()
print("Hello " + name)
Okay, this is cool and all, but why do we need to implement functor and applicative?
Well when you look at what we are doing, >>= hides a lot from us.
When we have a look at what functor and applicative add to the
equation we can hopefully see why we need them as well.
Functors
A functor is a datatype where we can (f)map over the inner value without losing
the outer context.
It gives us the <$> operator, otherwise known as fmap.
Its type is
(<$>) :: (a -> b) -> f a -> f b
This operation takes a function that transforms type a into type b, and a
functor of type a; it transforms that into a functor of type b.
Simple enough.
One little side note: haskell is curried, meaning that we can write
something like (f <$>), which returns a function that takes a functor of
type a.
If we say for demonstration that f is a function that takes an Int and
returns a String, our types would look like this.
f :: Int -> String
(f <$>) :: f Int -> f String
Essentially we have transformed our lowly f, which can only work on simple types,
into a function that works on functors. This is known as a lift operation.
This is important for later.
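To make that lift concrete, here is a small demo using show as a stand-in for our f (a plain Int -> String function being mapped over two different contexts):

```haskell
main :: IO ()
main = do
  print $ show <$> Just 1     -- Just "1"
  print $ show <$> [1, 2, 3]  -- ["1","2","3"]
```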
Applicatives
Applicatives add a few more operations to the mix, notably pure and <*>.
Here are the types
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
Pure is simple enough. It takes a value and "wraps" it into an applicative. This
raises a value and allows us to use it in the applicative space.
<*> takes a function wrapped in an applicative and composes it with another
applicative. If you compare its type to that of <$>, we can see that they are
similar, but <*> allows us to use a function in a context! This makes it a more
general version of fmap.
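A quick demonstration on Maybe, with pure wrapping a bare function and <*> applying wrapped functions to wrapped values:

```haskell
main :: IO ()
main = do
  print $ pure (+ 1) <*> Just 2       -- Just 3
  print $ (+) <$> Just 1 <*> Just 2   -- Just 3
  print $ (+) <$> Just 1 <*> Nothing  -- Nothing
```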
Also note that
(f <$>) :: f Int -> f String
(pure f <*>) :: f Int -> f String
Why is this useful
Well, these operations allow us to compose contexts together, something that was
not possible with just <$>.
For example, let's take (min <$>).
min :: a -> a -> a
(min <$>) :: f a -> f (a -> a)
Here we are using a function that takes two arguments rather than one, and here
we can see our problem: we have a function wrapped in a context. If only there
was an operation that allowed us to compose contexts together.
As we can see, the left-hand side of the expression below has the type f (a -> a)
and the right-hand side has the type f a; these then combine and come to the
correct result.
min <$> Just 1 <*> Just 2
This scales. Here is a function which takes in three arguments and adds them.
Here we lift f, then apply one context. We get back a value which takes in
another context and returns a function within that same context 1, which we can
continue to chain with other values using <*>.
f :: a -> a -> a -> a
f a b c = a + b + c
(f <$>) :: f a -> f (a -> a -> a)
(f <$> Just 1 <*>) :: f a -> f (a -> a)
(f <$> Just 1 <*> Just 1 <*>) :: f a -> f a
Binding this all together
So we have the ability to transform the inner value of a context, and the
ability to compose two or more contexts together. The problem arises when we
want to compute the next context based on the result of the previous one. Look
again at the type of <*>
(<*>) :: f (a -> b) -> f a -> f b
We know the end goal of this computation, as all <*> is doing is satisfying the
contexted function. This limits us to computations where we can reason about the
end result. What about a computation where we can't, where we need to look at the
result of the last computation before we move on? This is a power monads have.
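A small sketch of that power using Maybe: safeDiv (a helper made up for this example) inspects the value it receives and decides whether the chain continues or collapses to Nothing.

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing            -- bail out of the whole chain on division by zero
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print $ Just 10 >>= safeDiv 100 >>= safeDiv 80  -- Just 8
  print $ Just 0  >>= safeDiv 100 >>= safeDiv 80  -- Nothing
```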
Let's revisit the type of >>=
(>>=) :: m a -> (a -> m b) -> m b
The first argument is a contexted value; you can reason about it like it's some kind of computation. This computation is then "unwrapped" and passed into a function which, crucially, can decide what to do. We do not need to think about whatever end goal we want right at the beginning; we can go as the wind tells us, so to speak. This is useful in places where we need to parse some kind of contextual information, for example a context-sensitive language such as some markup languages, including the one I am currently writing this post in.
A monad in plain sight
So we have discussed what all of these things are, but let's look at a real-world monad, one that you have probably already used: the Async monad!
Yes, if you have done async programming then you have used a monad. Let's have a look at an example.
fetch(`http://localhost:8080/some-data`).then(response => {
  if (response.ok) {
    return response.text().then(text => JSON.parse(text))
  }
})
Here we receive a promised response from fetch. We then unwrap its inner value and get our response object. After playing with it, we extract the text (which is a promised string) and parse it into a JSON object. This entire expression returns a promised JSON object.
In this case we take a context, unwrap it, then return back the same context with a transformed value.
We decide as we go, Our next computation is dependent on the value of the last.
Note how async/await is basically do notation in this case
const getData = async (idx) => {
let response = await fetch('http://localhost:8080/some-data');
if (response.ok) {
let text = await response.text();
return JSON.parse(text);
} else {
    throw new Error("An error has occurred")
}
};
async = do
await = <-
Why did I write this?
This is an explainer I have done less because I want to be the one to tackle the monad fallacy, and more because it's fun and a good way to help me solidify what I know. Plus it may help build intuitions about these types. Though it must be said:
There is no royal road to Haskell. —Euclid
The best way to learn is to get your hands on them and play with them. No amount of theory will do you any good unless you put these ideas into practice. Once you do, you start to see the patterns, and then you can really get into the meat of them and become an epik haskeller. Some of the resources I really like include The Typeclassopedia, this video on the IO monad, this video implementing a json parser in haskell and this course from the University of Pennsylvania. Though it did not really begin to click for me until I started playing with Async in Dart.
Hopefully this is helpful and/or interesting. If I have made a mistake, or you want to discuss this, my email is here!
Footnotes
TODO Web scripting with ruby
TODO Recreating the JS object system in ruby
I had a funky idea: why not try and recreate the js object system in ruby? Why? Well, because we can. This idea dawned on me when I realised I can add property-like access to hash values using method_missing.
class Hash
def method_missing(prop, *args, &block)
self[prop]
end
end
hash = {
hello: "hello is not a method 😱",
}
puts hash.hello #=> "hello is not a method 😱"
Ignoring the method definition, this looks a lot like javascript, and now I want to see how far we can take it.
Some expectations
Now, this will not lead to a full look-alike of Javascript's object system; we can get close, but we are still limited by ruby's syntax. In any case, I think we can create something that works a lot like the real thing and learn something along the way!
Why javascript's object system is special
Let's take a minute to discuss javascript's object system. JS is interesting because you do not need to go through classes to make objects.
Properties
Properties are our object attributes; they are our values, and they can be read and written to.
obj = {
first: "Joe",
last: "mama"
}
console.log(obj.first) // => Joe
obj.last = "Son"
console.log(obj.last) // => Son
We can already get our properties, but we need to be able to set them.
Now, in a pitiful language we would be stumped, but not in ruby. Here setting
attributes is also a method call that can be caught with method_missing!
class Hash
def method_missing(prop, *args)
puts prop
end
end
hash = {hello: "hi"}
hash.hello = "greetings" # => :hello=
As you can see, it's just our method name with an equals sign appended to it. Check for that and we can set the property in question.
class Hash
def method_missing(prop, *args, &block)
if prop.end_with? '='
self[prop.to_s.delete_suffix('=').to_sym] = args.first
else
self[prop]
end
end
end
hash = {hello: "Hi"}
puts hash.hello # => "Hi"
hash.hello = "greetings"
puts hash.hello # => "greetings"
And just like that we can now get and set properties.
Methods
Methods are a little more interesting. Methods are properties that are
functions; the way they access the object is through the this keyword.
obj = {
first: "Joe",
last: "Mama",
full () {
return `${this.first} ${this.last}`
}
}
console.log(obj.full()) // => Joe Mama
Now, this is a trivial case, but methods can do all sorts of things: not only access our properties but set them, with arguments taken in from the caller. All of this hinges on accessing the special variable…
This
this in js is an implicit and usually hidden argument to all functions (except
arrow functions). It contains a reference to the object we are working on; you
can think of it like self in languages such as python and ruby.
this can be passed in explicitly by using the .call method on the function, like
so. In fact, obj.method() is just syntax sugar for the .call method.
obj.full() == obj.full.call(obj) // => true
This is visually similar to python. The only difference being that method definitions need to take an explicit self argument as their first positional argument.
class A():
    def method(self, *args):  # explicit self argument
        return args

A().method(1, 2, 3)  # implicit self passed in when called.
We can actually implement the python style of "this passing" relatively simply, using lambdas and currying.
class Hash
def method_missing(prop, *args)
if prop.end_with?("=") # check if its a set
self[prop.to_s.delete_suffix('=').to_sym] = args.first
elsif (accessed_prop = self[prop]).instance_of? Proc
# curry the method and then call it with self.
# This returns another method which can take the rest of the arguments
accessed_prop.curry.call(self)
else
accessed_prop
end
end
end
hash = { hello: 'Hi',
greet: ->(this, name, l_name) { puts "#{this.hello}, #{name}, #{l_name}" } }
hash.hello = 'greetings'
puts hash.greet.('Joe', 'Mama') # => "greetings, Joe, Mama"
Getters and Setters
Prototypes
DONE The Reader Applicative and abstraction
CLOSED: [2023-04-10 Mon 02:43]
Now, this is not a haskell blog but this is the second interesting thing haskell has offered me.
Today we are discussing the curious nature of the Reader monad (well, the Reader applicative functor, as I don't plan on delving into the monad aspects a terrible amount).
To do this we will be discussing this pairs function.
pairs :: [a] -> [(a, a)]
pairs = zip <*> tail

On the surface it's all weird and magical, but we will walk through the types and the implementation so that we can maybe pick up an intuition for how this works in general.
Now this function takes in a list and constructs a list of pairs, where the second slot holds the item one over in the list from the first slot. We can define it like this.
pairs lst = zip lst (tail lst)
print $ [1..5]
print $ pairs [1..5]

[1,2,3,4,5]
[(1,2),(2,3),(3,4),(4,5)]
Now the question becomes: how does the first definition become the second using the Reader applicative? How does the type work out in such a neat fashion? How does this really abstract thing turn into something so concrete and useful? Well, fear not, dear reader, we will answer these questions in due course.
How do these types work out?
Let's start off with the types.

(<*>) :: Applicative m => m (a -> b) -> m a -> m b

This is the general type of the ap operator, but in this case we are working with
the Reader applicative. In that case we need to see what it looks like when we
collapse the constraint.
(<*>) :: (r -> (a -> b)) -- (1)
      -> (r -> a)        -- (2)
      -> (r -> b)        -- (3)

To anyone who has worked with Haskell a little bit, this should be readable.
1. is a function that takes in a value r and returns a function from a to b
2. is a function from r to a
3. is a function from r to b. This is our return value.
where r, a and b are type variables that will collapse as we apply arguments.
Note how our context is this (r -> ...) function. This means our functions have
to take in the same first argument. You can intuit this as an "environment"
these functions take in, though we will discuss the uses of the Reader monad in
a bit.
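To make that "shared environment" idea concrete, here is a tiny sketch using plain arithmetic functions (these are illustrative and not from the pairs example):

```haskell
-- Both (+) and (*2) receive the same argument r.
-- ((+) <*> (*2)) r reduces to (+) r ((*2) r), i.e. r + r * 2.
shared :: Int -> Int
shared = (+) <*> (*2)
```

So shared 3 is (+) 3 ((*2) 3), which is 9: the one environment value 3 is handed to both functions for us.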
We can actually clean this up a little: the -> operator is right associative,
meaning a -> b -> c -> d is the same as a -> (b -> (c -> d)).
With this knowledge in hand, our type before turns into.
(<*>) :: (r -> a -> b)
-> (r -> a)
-> r
-> b
Here we can see something: our first argument is a function from r to a to b,
our second argument is a function from r to a. This suggests we will combine
these functions so that the second argument to the first function is the result
of the second function (wordy, I know). We also see how the return type b in the
first function is also the return type of the ap operator itself. This type is
pretty good at hinting both at what this function takes in, and at how it's
combining our arguments under the hood.
Now let's have a look at the types of zip and tail.
zip :: [a'] -> [b'] -> [(a', b')]
tail :: [a'] -> [a']
We can see both of these functions take in an [a'] and then do something with
it. In other words, our [a'] becomes our r. We can continue this process of
subbing types into our ap operator.
zip :: [a'] -> [b'] -> [(a', b')]
thus
r :: [a']
a :: [b']
b :: [(a', b')]

When we fill in our type with this information we can see our type popping out.
(zip <*>) :: ([a'] -> [b']) -> [a'] -> [(a', b')]
Adding tail into the mix constrains the type of b' even further.
tail :: [a'] -> [a']
thus
b' :: a'

Applying this gives us our final type.
(zip <*> tail) :: [a'] -> [(a', a')]

Congrats, we have now manually done the job of the Haskell type checker. Hopefully we now see how, just by following the types and using abstractions, we have come back to the type of thing we want to do. This is nice and all, but what about the actual implementation? The type is useless if it does not follow our logic.
Why does the implementation work out?
The implementation of our ap operator for our Reader applicative is as follows.

(<*>) :: (r -> a -> b) -> (r -> a) -> r -> b
(<*>) f g r = f r (g r)

If we sub in our functions, we see our implementation pop out.
(<*>) zip tail lst :: [(a, a)]
(<*>) zip tail lst = zip lst (tail lst)
This leads us back to pairs = zip <*> tail, which becomes our final implementation.
So now, why does the Reader monad exist?
Before we delve into that, we need to discuss why we use applicatives and monads. This was discussed in more detail in my understanding monads post, but here is a smaller run down.
An applicative functor allows us to compose contexts together into larger ones, like we have seen. It allows for a lot of very interesting abstractions such as parser combinators 1 as well as many other use cases (note that all monads you have played with are also applicatives). We see here how we have taken two functions that take in the same first argument and used the Reader applicative to combine them into something larger. This scales.
zip3 :: [a] -> [b] -> [c] -> [(a, b, c)]
(zip3 <*>) :: ([a] -> [b]) -> [a] -> [c] -> [(a, b, c)]
(zip3 <*> map show) :: Show a => [a] -> [c] -> [(a, String, c)]
(zip3 <*> map show <*> map even) :: (Show a, Integral a) => [a] -> [(a, String, Bool)]
Here we essentially collect transformations of a list of type [a]. Each function
on the left hand side receives this [a], but it's the responsibility of the
leftmost function to collect it all together. This is a small contrived example,
yet the rules here would apply to any set of functions that take in the
same first argument.
Here is another example: we have a type with three fields, and functions that each extract a piece of information from a single string.
data Person = Person {name :: String, age :: Int, job :: String}

constructType :: String -> Person
constructType str = Person
  (extractName str)
  (extractAge str)
  (extractJob str)

But now, instead of passing in str manually, we can use this Reader applicative to pass this "environment" implicitly.
constructType :: String -> Person
constructType = Person <$> extractName <*> extractAge <*> extractJob
Again, here, follow the types. <$> is fmap; it lifts Person from a simple function
to a function that works with our Reader applicative.

(Person <$>) :: (r -> String) -> r -> (Int -> String -> Person)
We can then keep on adding functions with the use of our <*> operator like so.

(Person <$>) :: (r -> String) -> r -> (Int -> String -> Person)
(Person <$> extractName <*>) :: (r -> Int) -> r -> (String -> Person)
(Person <$> extractName <*> extractAge) :: r -> String -> Person
(Person <$> extractName <*> extractAge <*> extractJob) :: r -> Person

We take this further with monads, where we can use the result of one computation to inform the next. It allows us to combine these computations together using context.
It's why the IO monad works so nicely. The Reader monad allows us to compose together computations which all need some kind of shared read-only state. This is useful when passing around things like app configuration (values such as database or network settings that only become known at deploy time), or something like React props.
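As a small sketch of that configuration use case (the Config type and field names here are made up for illustration, not from any real library):

```haskell
-- A hypothetical read-only environment.
data Config = Config { dbHost :: String, dbPort :: Int }

-- Each field accessor is a function from Config, so the Reader
-- applicative threads the same Config through both of them for us.
connString :: Config -> String
connString = (\host port -> host ++ ":" ++ show port) <$> dbHost <*> dbPort
```

Calling connString (Config "localhost" 5432) hands the one Config to both dbHost and dbPort, giving "localhost:5432", without us ever naming the argument.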
This post only really focused on the Reader applicative. If you want to see how the Reader monad works, have a look at this post from Dollar Shave Club.
The neatness of abstraction.
We have now used abstract tools to solve our concrete problems. Why is this neat? Well, now that we have expressed our solution in terms of this abstraction, we can use all of the tools and types of this abstraction to aid us further.
Take for example the function sequenceA.

sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)

Here we can see it essentially turns a type inside out. Now this may not seem useful, but imagine what it would look like if we collapse the constraints.
sequenceA :: [r -> a] -> r -> [a]
Here we have a function that takes in a list of functions from r to a, and it
returns a function from r to [a].
In other words, we can perform a set of transformations on a single value.
sequenceA [(+1), (+2), (+3)] 1 -- => [2,3,4]
This may seem contrived but you can imagine use cases: we need to pass a user given value through a gauntlet of checks, or we take in a value and need multiple permutations of it, and so on. I am sure that people are more creative than me.
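For instance, here is a minimal sketch of that "gauntlet of checks" idea (the predicates are arbitrary placeholders):

```haskell
-- A list of checks that all read the same input value.
checks :: [Int -> Bool]
checks = [(> 0), even, (< 100)]

-- sequenceA collapses [r -> a] into r -> [a]:
-- feed one value to every check and collect the results.
valid :: Int -> Bool
valid = and . sequenceA checks
```

valid 42 runs every predicate against 42 and ands the results together, and adding a new check is just another entry in the list.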
Just by re-framing our problem using this abstraction, we have turned something pretty manual and "low level" into something smaller, easier to extend and nicer, and that's pretty neat.
Conclusion
Hopefully now you have a small intuition for the Reader applicative. The Reader monad is another beast, but now that you have the basics of the type out of the way, you can pick that up with a little less head scratching.
Again, this was not written to be useful, but if you did find it useful feel free to email me (my address is somewhere on this site).
Appendix
So there is actually another way to write pairs, using ap, the monadic version
of the <*> operator, implemented in terms of the Reader monad.

pairs = ap zip tail

This is a historical artifact, as Monads are older than Applicatives, but it means we now have another way of framing the problem. As the type is essentially the same (just constrained to Monads), all of the type work we did still applies, but the implementation and how we get back to our first solution is interesting.
The implementation of ap is as follows.

ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }

As do notation is syntax sugar for >>=, let's get rid of it.
ap m1 m2 = m1 >>= (\x1 -> m2 >>= (\x2 -> return (x1 x2)))
The implementation of >>= and return are as follows
(>>=) :: (r -> a) -> (a -> r -> b) -> r -> b
f >>= k = \r -> k (f r) r
return :: a -> r -> a
return = const

With this we can start to sub.
-- return = const
ap zip tail = zip >>= (\x1 -> tail >>= (\x2 -> const (x1 x2)))
-- sub inner >>=
ap zip tail = zip >>= (\x1 -> (\r2 -> (\x2 -> const (x1 x2)) (tail r2) r2))
-- sub outer >>=
ap zip tail = (\r1 -> (\x1 -> (\r2 -> (\x2 -> const (x1 x2)) (tail r2) r2)) (zip r1) r1)
-- move r1 to the left hand side
ap zip tail r1 = (\x1 -> (\r2 -> (\x2 -> const (x1 x2)) (tail r2) r2)) (zip r1) r1
-- replace x1 with (zip r1)
ap zip tail r1 = (\r2 -> (\x2 -> const ((zip r1) x2)) (tail r2) r2) r1
-- replace x2 with (tail r2)
ap zip tail r1 = (\r2 -> const ((zip r1) (tail r2)) r2) r1
-- replace r2 with r1
ap zip tail r1 = const ((zip r1) (tail r1)) r1
-- const x = (\y -> x)
ap zip tail r1 = (\c1 -> ((zip r1) (tail r1))) r1
-- replace c1 with r1
ap zip tail r1 = ((zip r1) (tail r1))
-- clean up
ap zip tail r = (zip r) (tail r)

Easy to read, I know. This took me a while to work out, but playing with it helped quite a bit.
Footnotes
TODO Ox Hugo and brain computer interfaces
Originally this post was meant to be a review of ox-hugo, but in writing that review I realised that it had instead become a review of org mode. As I am in the habit of going on long tangents, I wanted to discuss why org mode was such a good fit for writing prose (at least for me) and what actually constitutes a good brain computer interface. Do take what I say with a grain of salt, as I am not a person trained in user experience or anything really. I am just a person with too much time on their hands.
What do I mean by the phrase brain computer interface?
I use this phrase to highlight what programs are. They are an interface that turns thoughts in our brains into words (both in the textual and computing sense) on a computer. Doing so effectively differs from person to person, with many settling for interfaces that, while allowing them to produce the material they want, are slow and cause the user to fight the interface instead of allowing their thoughts to flow.
Getting our thoughts out of our heads.
Before the computer, people still wanted to get thoughts out of their heads and onto something that does not forget as easily. Most of us would think of paper and pen in this scenario, but it goes back further: people carved graphemes into stone, wrote on palm leaves with ash based inks, on animal skins and on flattened reed stems 1. With all of these systems there is no organisation imposed on us by the medium itself (most of the time). We are free to add in things like drawings, or change the orientation and style of text with different strokes of the pen. We can empty the stream of thoughts as fast as our hand can move. This free form writing system is freeing for prose, as we are able to include things like diagrams and pictures close to our mind's image of them, allowing us to enunciate concepts in ways that are much more intuitive. This is a boon for writing and drafting, as adding these little pictograms can often be a better translation of our thoughts than the raw text.
The problems arise when we want to edit our work. When a scratch of pen has been planted on paper (try saying that five times fast) it becomes permanent. Even if you were to use pencil and an eraser, marks still remain and the paper is never the same. Editing a work becomes a pain, and revisions need to be written out in full instead of edited in place.
Editing our thoughts after they are out.
This is the advantage of computer systems. As these scratches are never permanent, someone can edit the work in place and rehash parts without having to rewrite the whole. It saves time and energy. The problem is that with the ability to edit documents quickly, you lose the free form structure of our documents from before. Computers inherently impose structure, as they need to translate our inputs into a concrete set of instructions. This is not inherently bad; structure can be helpful. The problem comes when the structure does not fit the way we think.
Imposed structure.
All computer writing tools have some structure to them. Markdown breaks things up into sections and paragraphs. HTML puts everything into nested elements which each carry their own attributes and semantics. LaTeX puts everything into environments, separating text with sections and chapter headings. These are pretty strict in that sense.
Word and other WYSIWYG editors on the surface do not impose these kinds of structure. You are free to write and include pictures anywhere you want, but at the price of not feeling like paper. To do things like bold text or edit how something looks, you must use the mouse: in the case of bolding text, highlighting the text and then using the interface or a keybind. This is not like paper. It's extra steps to do something simple. In a word, it imposes structure not on the text, but on how we write it.
This, to me, is not a good compromise. Instead of imposing structure on what we are writing, we are changing how we write. If I were to compare this to paper, it's like having to change tools to change the text on a page. This is fine, even desirable, when you want to fine tune how a document works and looks. But for writing? For getting ideas out of our heads? Let's just say I do not bring an art palette to my lectures…
Imitating paper
All of this is to say that the brain computer interface should pull the best concepts from pen on paper to become the best prose writing experience we can have. So what do I like about pen on paper? Firstly, it's all just one tool; the pen is an extension of me. If I want to change something, I change the way I write. Paper is simple; it only has one mode of interaction: the pen hitting the page. Paper enforces little structure; other than making sure the page does not rip, I can do what I want with the format. Paper is easy to get into… It may be silly to say, but it's important for the user experience.
For me the interaction model is key.
Footnotes
TODO The Avatar Live Action Pissed Me Off
I recently finished the new live action show on Netflix. You can re-read the title to see what I thought of it. It pissed me off for many reasons, and I can only think of one scene which was done better than the original. I always kinda knew it would fail to live up to my expectations, because rehashing the same story twice almost never leads to a good product, but even still this one missed so many of the marks that it left me feeling genuinely horrible after watching it.
They don't understand the source material
Straight up, they don't. In a lot of cases they make mechanical mistakes in the writing: foisting exposition on us instead of showing us through the action of the show, removing struggles and giving our characters no space for growth.
They don't understand how to interpret source material
I think this is more fundamental to what actually makes the show hard to watch. I couldn't care less about them not including my favorite scenes, or that they talked about instead of performing my favorite actions. But the fact that in every episode they missed the mark feeds into how I think they read the show. Their analysis is at best skin deep, looking at each episode, each story, as a set of static actions and words, instead of understanding what each episode says.
Each episode is not just the set of actions our gang of characters get into; each action and word represents something else and feeds into the themes, messaging and narrative of the show. Each character is not just some thinking, feeling being, but a representation of certain themes and emotions which move fluidly with the needs of the story. This kind of reading between the lines is the basis of a lot of literary analysis, and without it most of the meaning of the show, and what makes these shows special, is lost. The writers miss this in a lot of ways, combining and contrasting elements that may work in time and space but not thematically.
An example of this that really sticks out to me is the use of Kyoshi as Aang's first interaction with the avatar cycle. This makes some sense, as Kyoshi Island is one of the first stops in the show and there is a temple to her there. But introducing Aang to these concepts through Kyoshi suddenly introduces the avatar state with no context and no understanding of what it actually means, nor why the world is the way it is.
The one scene that is good
On the aesthetics.
Final thoughts
DONE My Thoughts on the first sub-book of Dune
CLOSED: [2024-03-26 Tue 00:54]
I have been reading Dune and finished the first sub-book. It's a wonderful book that has been somewhat spoiled for me in certain ways, but even still it's a thoroughly interesting read. To save space, instead of typing my thoughts directly into a discord post I am going to type them into a blog post. This will be shorter, less structured and somewhat akin to a conversation I am having with myself. So yeah.
SPOILERS AHEAD YOU HAVE BEEN WARNED
The structure and clarifying what terms mean
Dune is large, spanning many books, sequels and adaptations. When you say the word "Dune" you in turn refer to every single entity and sub-entity of this franchise. When I refer to the book Dune, I am referring to the first book in the series. When I refer to the sub-books of Dune, I am referring to the individual partitions within the book Dune. For me that is broken into "Book 1: Dune", "Book 2: Muad'Dib" and "Book 3: The Prophet". Idk if this is the same in other publications, but I will continue to refer to these as the sub-books. I have finished the first sub-book, and this is the basis of most of what I am writing about.
The first book of Dune
I don't know what it is about this book, but Dune has gripped me and has not let go. This is my second time starting Dune (I will get into why later), but in about a month I have powered through quite a bit of it. In a lot of ways it's dense with meaning, meaning that does not always reveal itself until later. Its descriptions are deep but do not drag. Its world is fleshed out, but only as much as needed (witty comparison to a Fremen or something). In other words, it's well written.
But to begin with, it is also slow. Within the first 250 pages not a lot of action happens; the world is just being set up, questions asked and the future foreshadowed. This is not to say it never picks up. Our patience is rewarded as we finish this sub-book, where all of this set up pays off with a bang. The slowness does not detract, however. The way Frank Herbert uses words conveys such rich emotion that I felt like I was riding in that thopter alongside Duke Leto; I could feel the weight that Baron Harkonnen's suspensors have to deal with, and feel actual disgust at his gluttony. It bears underlining how well this book is written.
On spoilers
Before I get into the actual content of this book, some words on my experience need to be mentioned, as it will poison my thinking.
I wish I could go into this book blind, but as I am writing this (March 2024) the world is picking up Dune en masse and talking about it. So through osmosis and my own stupidity I am learning things about the book I should not know. I am now going to figure out how to spoiler text in a blog post, but I will list some of the small things I have seen. Also, curse you Instagram reels.
If you have not finished the first two books of the Dune series, then do not open any of these. Don't open the last one unless you have finished all 6 books (I have no idea where it comes from).
Spoiler 1
Apparently Paul Atreides is going to commit an intergalactic genocide? And will begin to lose our (the readers') support? This is sad for me, as I will now be asking the question: when will Paul turn? Strings I would not otherwise connect to a genocide are now being connected. At this point in the book Paul is presented as a character we should be supporting. I guess it's about the journey, but still, I wish that was a twist I would have discovered by myself. :(
Funnily enough, the presentation of this fact has been about the movies (as far as I am aware) and a lack of media literacy in people who think Paul is presented as a hero whose every action is, by extension of that fact, justified.
Spoiler 2
The sand worms are the source of spice. This is a detail I probably already knew, as the character of Kynes hinted that the sand worms are a critical part of the spice systems on Arrakis, but even still I hate that I did not get to discover this myself.
Spoiler 3
Apparently Paul merges with a sand worm????
I am not sure of the context of this but this is one of those spoilers I can't forget and will bite me as I read more.
This is only a spoiler if you are like me and don't want any thoughts about the book to taint your experiences.
Thematic Spoiler
The Fremen are this Islamic / Arab coded society and this entire book is a metaphor for Colonialism, Interventionism and US imperialism in West Asia.
This is quite interesting to me, as for the time being the book has not shown the natives in as much detail as I would have liked, nor fleshed out the relationship between the settler population and the Fremen beyond a few small scenes. It also means I get to see what the book says, as I can't figure out where these metaphors connect to the real world without more info.
This is a point I will revisit as I read more of the book, one I kinda knew existed but would rather have derived myself.
In this sense these spoilers don't stop me from reading, but they still do spoil the experience. I have probably had a few more paper cuts, but I'll add them as I remember.
On this world and the characters of Dune
Within this world we seem to have some kind of overarching flavor of destiny. Destiny for the desert planet of Arrakis, destiny for each character that attaches itself to this planet.
For Leto it is death. "There is nothing for him," the Reverend Mother says. For Paul it is his role as messiah. For the planet, illusions of a stable and prolific water cycle. Nothing in this world seems to come completely out of left field. There is always some kind of insight, some kind of foreshadowing for these events. In this sense every word is something to hang onto, as if it leads to some kind of future event. This is not only something for the reader; it has an effect on our characters. We see them refer back to these prophecies as we continue through, and work with the conclusions they come to. We see how the world shapes these characters as they pick up its mannerisms and movements.
This brings me to the details of this world. It is not only painted with large vistas but also with small details. One scene that sticks with me is when a Fremen comes into the Arrakeen great hall and spits on the table. Everyone was ready to attack this person for disrespecting the Duke, but in turn the person sent ahead stops them and thanks him for the gesture of sharing his moisture with the Duke, a great sign of respect on a planet where water is so very scarce. Through little devices like this, an entire canvas is painted, a culture fleshed out and core ideas reinforced.
On the language of Dune
I have heard from others that the language in this book required a glossary. To me this feels odd, as the book has not failed to cover its tracks.
Take for example the term "Mentat". It is used multiple times to describe our characters; some of them belong to this class, and they seem to be made, but what they are is never fully explained until Paul goes through his transformation near the end of this sub-book. Through his experience of this transformation, we understand what makes a Mentat different from an ordinary person. This applies to a couple more ideas and phrases. "Lisan Al-Gaib" is mentioned as some kind of messiah figure, but then Kynes actually goes into detail about it as he sees Paul act like this messiah.
Muad'Dib is mentioned again and again at the beginning of each chapter, usually in the header, in excerpts from his selected writings or children's guides or other works by the Princess Irulan. But it is not until we reach the end of this sub-book that we see how it is actually Paul who is this Muad'Dib, and what that means. He is not only this messiah "Lisan Al-Gaib" but also this leader.
This is not to say that using a glossary would spoil the reading, but that the language is not there by accident and will resolve itself as we move through the story.
What I have learnt about myself while reading Dune
I don't really have more to say about Dune the book. But while I have been reading this I have learnt a few things about myself.
I can finish big books. With a little time before bed each day I can get through large sections of a book, and I can do it in a reasonable time. But in this case I am helped by a few things.
Good typesetting is a must. If a book has not been well typeset then it's harder to get through and I struggle to finish it. This is why I put Dune down the first time. The (digital) copy I had was hard to read and hard to parse. Not in the sense that the words were hard to read, but in the sense that different things should look different. In this case, what added to my confusion is that chapters were not really separated, and the chapter introductions by the Princess Irulan were not separated from the main body of text. All of this adds up to a not great reading experience.
I need smaller chapters where I can see my progress. I am a person who, partly because of internet brain rot, needs smaller chapters I can see my progress in. 50 pages in a chapter is too much for me to read comfortably without some kind of break. There are books where chapters represent hundreds of pages and you need to just pick a point to put the book down. This is not to say I can't read these books, but small chapters make it easy. This also feeds into typesetting, as my e-reader can track how far I am into a chapter. I can see pretty quickly, as a percentage, that I am nearly done, and I can call it quits or keep going.
Conclusions
Dune is a good book; I'll come back when I finish sub-book 2. All of this reads as an almost superficial understanding of the book, but I think that's okay for the moment as I have not gotten that deep into it. Let's see how I feel after finishing sub-book 2.
Also thanks to Leo from A blog with relevant information for proof reading my work and suggesting some improvements. Hopefully next time the patch is a little smaller!
In fact, our writing systems are influenced by the medium we used. Tamil script became a lot more curly because of the use of said palm leaves: hard horizontal lines can cause tears in the leaves, causing them to decay faster. So the script evolved more curls and fewer dots, up until printing became more common.