Virt-Manager running Windows 10

My employer has provided me with a lovely new laptop, a completely blank slate, ready for me to install an OS and set it up. I have gone with Fedora 38 Beta because of the maturity of Fedora in general, and a hedonistic desire for pain.

Some of my work is on Windows 10, for which I have a virtual machine that I copied over from the old laptop. Here’s where my pain begins.

Permissions – SELinux as set up on Fedora won’t let KVM touch the hard disk image I copied over. Try “chcon -t virt_content_t -R PATH/TO/VMs”.
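
For reference, the relevant commands look something like this. The chcon relabel is what I actually ran; the semanage/restorecon pair is a sketch of the more permanent fix (untested here), which records the label so it survives a filesystem relabel:

    # One-off relabel of the VM directory (recursive)
    chcon -t virt_content_t -R PATH/TO/VMs

    # Sketch of the persistent alternative; virt_image_t is the
    # read-write image type. Substitute your real absolute path.
    semanage fcontext -a -t virt_image_t "/PATH/TO/VMs(/.*)?"
    restorecon -R /PATH/TO/VMs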

You want multiple VMs to talk to each other without the IP address changing all the time, and they must be able to access the internet no matter what network you are actually connected to – Use the bridge. It’s probably already set up as “virbr0”. That’s all, just use bridged networking. Each machine will be allocated an IP on its next startup, and it will keep that address for a very long time.
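
For the record, the interface stanza in the domain XML ends up looking roughly like this (a sketch; “default” is the network backed by virbr0, and as far as I can tell the DHCP lease is keyed on the MAC address, which is why the IP sticks):

    <interface type='network'>
      <mac address='52:54:00:xx:xx:xx'/>
      <source network='default'/>
      <model type='virtio'/>
    </interface>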

Display and fractional scaling issues – Set “Scale Display” to “Never” and enable “Auto resize VM with window”. Set video to QXL and then edit the XML to double each of the RAM values. Make sure you have added a spice channel, target name “com.redhat.spice.0”. Install the spice tools on the guest machine. Set display scaling to 200% in the guest. Whatever fractional scaling you use on your Linux desktop, the guest will then be appropriately scaled.
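
As a sketch, the resulting XML looks roughly like this. On my machine the QXL defaults were ram=65536, vram=65536 and vgamem=16384, so doubled they become:

    <video>
      <model type='qxl' ram='131072' vram='131072'
             vgamem='32768' heads='1' primary='yes'/>
    </video>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
    </channel>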

Sleep mode doesn’t work or isn’t present – This is a pain, because suspend to RAM basically doesn’t work; it crashes on resume. Suspend to disk, on the other hand, does work with some coaxing, but there’s no button in the UI to initiate it. Edit the XML and ensure “suspend-to-disk” is enabled. Install the virtio guest tools in the guest; they can be downloaded from the Proxmox web site. Add a new channel device for the QEMU guest agent. Restart the guest. You can now hibernate the VM from the host with this command line: “virsh -c qemu:///system dompmsuspend win10dev disk”.
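
Sketching the XML I ended up with (from memory, so check it against your own domain):

    <pm>
      <suspend-to-mem enabled='no'/>
      <suspend-to-disk enabled='yes'/>
    </pm>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>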

Naturally much of this will work differently, or not at all, for you; but here it is as future notes for me, and hopefully a starting point for finding your own solutions.

We don’t use Linux because it’s easy.

YAFL progress

Parallel execution done easily and cheaply is at the top of the list of goals for YAFL. The model I am aiming for uses tuple construction as the implicit place to insert fork/joins, giving parallel execution of sub-expressions where the compiler has determined them to be expensive enough to benefit. The fork/join logic itself is designed to be very cheap. Initially I was aiming for a continuation passing style for this, until I realised just how expensive and pervasive it would be. Instead I have gone down the fibers route, with a custom fiber (and yes, I am using the American spelling, deal with it) library designed to have very low cost and a low memory footprint. This is all inspired by the approach taken by the Go programming language, with its segmented stacks.

A fiber library will in general launch a small number of OS threads on startup, equivalent to the number of CPU threads available. Fibers are like old-school cooperative multitasking, where each thread of execution has to yield control. The stacks tend to be very small, and the library is usually tightly coupled with an IO library that uses the OS async APIs to suspend fibers and resume them when IO is complete. On such a system the cost of context switching between fibers is extremely low, because the fiber library need only save the small set of callee-saved registers, the ones the calling ABI guarantees survive a function call, whereas the OS on a thread context switch must save ALL CPU STATE. It’s quite a lot. There are definite benefits to fibers, but you have to buy into them 100%, or maybe build them into the language at a low level such that you don’t realise it’s even happening.
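
To make the mechanism concrete, here is a minimal cooperative switch in C using POSIX ucontext. This is not YAFL’s runtime, just an illustration; in fact ucontext saves more than a fiber switch needs (it even makes a syscall for the signal mask), which is exactly why serious fiber libraries hand-roll the switch to save only the callee-saved registers:

    /* Minimal fiber sketch: one fiber that yields back to main. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;

    static void fiber_body(void) {
        printf("fiber: step 1\n");
        swapcontext(&fiber_ctx, &main_ctx);   /* yield to main */
        printf("fiber: step 2\n");
        /* falling off the end resumes uc_link (main_ctx) */
    }

    int main(void) {
        char *stack = malloc(64 * 1024);      /* small fiber stack */
        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp = stack;
        fiber_ctx.uc_stack.ss_size = 64 * 1024;
        fiber_ctx.uc_link = &main_ctx;
        makecontext(&fiber_ctx, fiber_body, 0);

        swapcontext(&main_ctx, &fiber_ctx);   /* run until it yields */
        printf("main: fiber yielded\n");
        swapcontext(&main_ctx, &fiber_ctx);   /* resume to completion */
        printf("main: fiber finished\n");
        free(stack);
        return 0;
    }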

In YAFL a fiber has a small stack that is not pre-allocated. After a stack has been re-used some number of times, e.g. 10,000, it is released back to the OS. If that means we don’t have enough stacks, new ones will be allocated. The point here is that the memory for a stack is allocated by the OS on demand as individual pages are accessed. Most work on a fiber will be short lived and use a small stack, but sometimes a job will cause more pages to be used, and once allocated they don’t get released back to the OS. By ageing stacks out of the system we allow this unused memory to go back to the OS over time, whilst retaining the benefit of re-using stacks without making OS calls all the time. Stacks are held in an LRU queue, so each time a stack is requested for a new fiber it is likely to be hot, and the queue is local to a thread so it is likely to be hot in the L1 cache of a single CPU thread.
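
Here’s a sketch (in C, and not the actual YAFL runtime) of the stack pool idea. The pool is thread-local so there is no locking, reuse is most-recently-used first, and an aged-out stack has its pages handed back to the OS with madvise rather than being unmapped, so the address range itself is still recycled:

    #include <stddef.h>
    #include <sys/mman.h>

    #define STACK_SIZE (64 * 1024)
    #define MAX_USES   10000

    typedef struct stack_node {
        struct stack_node *next;
        int uses;
    } stack_node;   /* header lives in the stack's own lowest page */

    static __thread stack_node *pool = NULL;   /* per-thread, lock-free */

    void *stack_acquire(void) {
        stack_node *s = pool;
        if (s != NULL) {
            pool = s->next;
            if (++s->uses >= MAX_USES) {
                /* Age it out: give the physical pages back to the OS.
                   The pages read as zeros when next touched, which
                   conveniently also resets the use counter. */
                madvise(s, STACK_SIZE, MADV_DONTNEED);
            }
            return s;
        }
        /* mmap reserves address space only; the OS allocates real
           pages on demand as the fiber actually touches the stack. */
        s = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return s == MAP_FAILED ? NULL : (void *)s;
    }

    void stack_release(void *stack) {
        stack_node *s = stack;
        s->next = pool;   /* LIFO: the hottest stack is reused first */
        pool = s;
    }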

A fiber, like an OS thread, has a priority. That priority is set automatically when a new fiber is started, to the next available ordinal number; or to put it another way, it is taken from an ever incrementing counter. Once assigned, the fiber is scheduled according to its priority, so when it suspends whilst waiting for IO and is later re-scheduled, it gets to jump the queue ahead of all the newer work. It’s not totally fair, but it should help to ensure low latency on IO without starving out other fibers.

At the moment the compiler doesn’t do any automatic detection of parallel workloads. It’s early days, and I need to prove the technology first. For now the compiler supports a keyword ‘__parallel__’, which constructs a tuple from the sub-expressions in its parameter list, executing each sub-expression in parallel. In terms of generated LLVM IR it ends up being relatively simple, but I’ll save that for the next post, maybe with some worked examples of the necessary transforms. For now it’s good to know that the YAFL compiler emits linear LLVM IR, with hints to guide the parallel transform phase. If the parallel phase never happens, the code will still compile and generate correct code, just without parallel execution. It’s cool.
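
As a taste of the shape of that transform, here’s roughly what __parallel__(f(1), g(2)) means, illustrated in C with a pthread standing in for a fiber (a conceptual sketch, not actual compiler output): fork the first sub-expression, evaluate the second inline, and join before the tuple is built.

    #include <pthread.h>
    #include <stdio.h>

    static int f(int x) { return x * 2; }    /* stand-ins for expensive */
    static int g(int y) { return y + 40; }   /* sub-expressions */

    typedef struct { int arg; int result; } task;

    static void *run_f(void *p) {
        task *t = p;
        t->result = f(t->arg);
        return NULL;
    }

    int main(void) {
        /* tuple = __parallel__(f(1), g(2)) */
        task t = { .arg = 1 };
        pthread_t tid;
        pthread_create(&tid, NULL, run_f, &t);   /* fork f(1)          */
        int second = g(2);                       /* evaluate g(2) here */
        pthread_join(tid, NULL);                 /* join before use    */
        printf("tuple = (%d, %d)\n", t.result, second);
        return 0;
    }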

Sleep matters

You know how it feels, late at night, you have been mulling over that problem that has bugged you all day. You don’t want to put the thought down, and wonder if spending just one more hour in front of the screen will solve it.

Stop. You’re not just delaying sleep, you’re ruining your quality of sleep. Write down your thoughts, make notes on the options you are considering, just enough that you won’t worry about losing your train of thought. And then walk away.

Don’t underestimate the power of sleep in solving problems. Tomorrow you are more likely to find the solution if you have slept well, but it’s hard to sleep well if you’re worried about losing important ideas.

Good night, sleep well.

Could you live with fixed size arrays only

If the trade-off for a memory manager that works efficiently at both high and low scale (think embedded devices) was that every array in your language must have a size that is fixed at compile time, and cannot exceed some very strict maximum, would that be a price worth paying?

If it were a purely functional language, there’s a good chance that all collection types are persistent, meaning they are very unlikely ever to need large arrays anyway. But even persistent collections implemented in a language like Java use dynamic arrays, arrays whose size is chosen at runtime. Using a set of pre-defined fixed size arrays, say of sizes 4, 8, 12, 16, 24 etc. up to 396 (yes, an arbitrary figure, but it is meaningful), would be a workable and quite efficient alternative.

If you’re wondering what an array is, sorry, my posts are technical. If you’re wondering what a persistent collection is, then your instinct might be misleading you into thinking it has something to do with databases. Sorry, that is far from the truth. If you cannot mutate collections, the methods for growing a collection are very different, as you can’t simply assign a new element at the end of an array. Instead functional languages (and non-functional ones sometimes, but that’s another tale) encode each modification as a tree of changes, and each access enters that tree at a specific node to see what the collection looks like at that version. So, replace the word ‘persistent’ with ‘versioned’ and your instinct about ‘versioned collections’ should serve you better.
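
Here’s a tiny C sketch of the idea, using a persistent list rather than a tree for brevity: “adding” an element never touches existing nodes, so every older version remains valid and shared.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct node {
        int head;
        const struct node *tail;   /* shared, never mutated */
    } node;

    static const node *cons(int value, const node *tail) {
        node *n = malloc(sizeof *n);
        n->head = value;
        n->tail = tail;
        return n;
    }

    int main(void) {
        const node *v1 = cons(1, NULL);   /* version 1: [1]    */
        const node *v2 = cons(2, v1);     /* version 2: [2, 1] */
        const node *v3 = cons(3, v1);     /* version 3: [3, 1] */
        /* v1 is still intact, shared as the tail of v2 and v3. */
        printf("%d %d %d\n", v1->head, v2->head, v3->head);
        return 0;
    }

Real persistent vectors generalise the same sharing to shallow trees whose nodes are small arrays with a fixed branching factor, typically 32.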

The key here is that persistent collections rarely if ever need to allocate a large array.

So what’s the problem? Why would we want to impose such a strict limit on things? We could allow runtime-sized arrays, and we could allow large arrays. If we allow large arrays, code that runs nicely on a system with lots of memory will struggle on a smaller system even when it could fit within available memory, mostly due to fragmentation, and working on smaller systems is a design goal of YAFL. Ok, so we limit maximum array size but allow arrays to be sized at runtime. Then we have a secondary problem of knowing when you will hit the limit: checking for it at runtime would complicate code more than fixed size arrays with nice compile time errors do, and would likely introduce more errors.
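
To make the size-class idea concrete, here is a sketch of how an allocator might round requests up (the class table is illustrative; only the 396 cap comes from above, and in YAFL exceeding it would be a compile time error rather than the runtime failure shown here):

    #include <stddef.h>

    static const size_t classes[] = { 4, 8, 12, 16, 24, 32, 48, 64,
                                      96, 128, 192, 256, 396 };
    #define NCLASSES (sizeof classes / sizeof classes[0])

    /* Round a request up to its size class; 0 means "over the cap".
       Free blocks within a class are interchangeable, which is what
       keeps fragmentation bounded. */
    size_t size_class(size_t request) {
        for (size_t i = 0; i < NCLASSES; i++)
            if (request <= classes[i])
                return classes[i];
        return 0;
    }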

YAFL is an experimental language, and it is a vanity project, so as always I will do as I please. I’m part way down the path of implementing array support, and will see what strings and collections look like when implemented on top of this. I’ll report back with my findings in a few weeks.

PS. It’s not just about array size. It’s about having a maximum size for any heap allocation, to reduce the possibility of fragmentation. On embedded systems this is a big risk without dipping into the murky world of compacting memory management, and that is definitely not simple.

Yet Another Functional Language

It’s been a while, and I was probably quite ill the last time I wrote anything. Since then I have had a good year, have run a marathon and had a few relapses. One thing has always held true, my utter obsession with creating the latest and greatest whatever.

At the moment it’s a new programming language that I have called YAFL (for now). It sounds kinda cool anyway, so I might stick with it.

Today I turned a corner in its development: for the first time, I have a full(ish) complement of operators and can print numbers. Things suddenly accelerated once I added the ability to embed LLVM IR into the language itself, reducing compiler bloat massively.
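
By way of illustration (I won’t show the actual YAFL embedding syntax here), this is the kind of primitive that can now live in library code as a few lines of embedded IR rather than as a special case in the compiler:

    define i32 @add_i32(i32 %a, i32 %b) {
    entry:
      %r = add i32 %a, %b
      ret i32 %r
    }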

I don’t want to write too much here, so if you’re interested, and would like to have a play, here is my GitHub repo: https://github.com/supersoftcafe/yafl/blob/main/README.md.

I’d love to chat about it if anyone is interested. Tomorrow I’m back to work, so development will slow down.

Heart procedure

I suffer from arrhythmia. As I type this my heart is doing some really weird things in my chest, and for me that’s normal. Left and right ventricles are confused, beating out of sync, slow missed beats, sequences of rapid beats. It’s not life-threatening, but it does impact quality of life. Walking up the stairs is difficult, and strenuous physical exercise is too hard.

On Wednesday I am having a procedure done on my heart. It’s called ablation, and involves threading a device from the groin area all the way up to the heart, then making a small hole to get from one side to the other, and once there finding an area of electrical activity and burning it out to create scar tissue.

It takes about 3 hours. It’s done under local anaesthetic, with a bit of sedation to make me drowsy. Drowsy, but awake.

Everyone I have spoken to that has had this procedure has told me that it was not a big deal, and they are glad that they did it. I believe them, and despite being a bit nervous I am looking forward to this, to the hope that I attach to it. I want to get back to a normal life, back to my daily run and being able to walk up a flight of stairs without getting out of breath.

There are of course risks, and to accept the fact that I need the operation means that I must accept the risks, and the possibility of things going wrong, of the life changing impact of things going wrong. It’s at times like this that I can understand why some people choose blind faith over a desire for cold hard proof. I am the kind of person that will definitely look a gift horse in the mouth, and cannot accept any hypothesis that is not falsifiable. The God hypothesis is not falsifiable, and so for me it is useless.

Still, we all need hope. If you get hope through faith, I am happy for you, I am happy that you have hope. My hope comes from another place, from a kind of fatalism, and a little from the Buddhist idea that it is the desire for things that makes us unhappy. I must accept that my future is unwritten, and it could be one of many, and I should not desire one future so much that an unfavourable outcome will leave me unhappy. I must accept the risks and whatever comes next in order to be calm and to be happy.

Friends… How many should you have

This is an interesting little piece on the BBC website about not having friends, or at least not many.

https://www.bbc.co.uk/programmes/p09rvdjb

I wonder: is there societal pressure to have, or at least to declare that you have, groups of friends? I ask this because I have never been shy about my very small and poorly defined friendship circle. One might even call it a dot.

I have always been comfortable with my own company and have not yearned for somebody to call ‘friend’. I have what I might call friends, but isn’t even the definition of friend a subjective one? Somebody might say a friend is a person you know and spend time with, whereas another might define a friend as somebody you are close to and would go out of your way to help under any circumstance, and they likewise.

So aren’t we looking less at people that have fewer friends, and more at people that have a differing definition? After all, it’s a broad world with many varying ways of thinking about it.

I once heard friend defined as somebody you may not speak to for ten years, and then out of the blue you’ll call them (or they you) and pick up on the last conversation as if it were yesterday. I personally like that definition; it speaks to me of a kind of kindred spirit.

So, do I have many friends? I have many people I share my life experiences with, and maybe go to the cinema with. Of those, I have a tiny set of people that would go out of their way to help me, and I likewise for them. And then there’s the final definition, somebody with whom I could pick up a conversation after ten years… Well, maybe I am being too picky.

Do we need another functional language?

I want to create a new functional language.

Most of you are not developers. This is not for you… Sorry, but that’s how posting works, everyone gets to see it. Others of you are developers, and will be tutting at that first line. “What, another one!!!” is what they’re thinking, and I agree. But I have the bug. I am doing this.

At this time the most popular and/or forward thinking languages are crossing the line between imperative and functional. Kotlin is imperative, but with a lot of functional goodness. Haskell is purely functional, but has the IO monad as a way of behaving more like an imperative language.

I wonder if we can do better. This is what I am thinking:

  1. Grammar and vocabulary that are familiar to the imperative programmer. To my mind one of the drawbacks of the current crop of pure functional languages is their heavy reliance on symbols and strangely named things that make most people scratch their heads. When you spend enough time with it, the concept of a monad turns out to be quite simple; it’s the name, the descriptions and the documentation that suck. Let’s pull back a bit and use a vocabulary that the majority of developers will recognise. I am thinking of something that straddles the line between Kotlin and Python, with a light sprinkling of Haskell.
  2. No standard library. That doesn’t mean that there aren’t standard algorithms or structures, just that there is no external dependency beyond the binary itself and libc. Ideally a small hello world program should load as fast as its C equivalent and not be more than twice the size.
  3. Following on from point 2, it should use reference counting memory management. Garbage collection implies a large and complex algorithm to manage the heap. We can use the functional nature of the language to reduce the overhead of reference counting (just like Haskell reduces the overhead of GC), and steal some tricks from Swift. There’s a small sketch of the idea just after this list.
  4. Implicitly parallel. You have a four-core machine? Great. No special tricks. No understanding threads or locking semantics. It just works… So… No pressure.
  5. No curly braces for basic blocks like Kotlin has, but also, no indent based grammar like Python has. It can be done, you just have to make the decision very early so that grammar decisions are made wisely to stay on the path.
  6. Generics. Kotlin and Haskell have generics, albeit by different names. Python has duck typing. I’d like to lean more towards the Haskell end of the spectrum, but using a more Kotlin inspired syntax.
  7. Unused memory should be released to the host quickly.
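
On point 3, a minimal sketch of what I have in mind (illustrative C, not a real implementation): in a strict functional language without mutation, objects cannot form reference cycles after construction, so plain retain/release is enough and no cycle collector is needed.

    #include <stdlib.h>

    typedef struct object {
        long refs;
        void (*drop)(struct object *);   /* releases children, frees self */
    } object;

    static void retain(object *o) { o->refs++; }

    static void release(object *o) {
        if (--o->refs == 0)
            o->drop(o);   /* recursively releases whatever it references */
    }

    /* Note: the counts here are non-atomic. Combined with point 4
       (implicit parallelism) a real implementation needs atomics, or
       one of Swift's tricks for keeping the fast path cheap. */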

This is just a starter list. I have begun prototyping, with the goal of having a very simple hello world working soon. That means getting heap management, function calling, type inference and IO working.

What do you think? Am I a fool to try, or do you wish me luck?

My perfect laptop exists

I have seen my perfect laptop. It’s fully configurable, fully user serviceable, fully upgradable, has a 3:2 display and the manufacturer is even working with major Linux distros to make sure it’ll work. Normally that makes me think of a bulky machine, but no, this time it’s compact as well!

Unfortunately it is not available on our lovely shores yet, but I have registered interest and hope that it will come.

Here’s a great review by Linus Tech Tips.