So I'm back in the C# world. A world of elegant language design, beautiful modern features, clean implementations, great documentation, and a terrible ecosystem... Windows. Though I have to admit I've been working more with .NET Core on Linux and Docker than on Windows lately, so the C# world is looking better than it ever has. I've been thinking about posting a quick note about my experience with .NET Core for a while, so maybe my next blog will be something along those lines... stay tuned.
Back to the main point. C# and the .NET Framework generally try to stay true to the definitions of many language concepts, warming your local computer scientist's heart. Yet sometimes this purist's attitude can get in the way of useful language constructs that don't quite fit the theoretical definitions. I'm talking about enums, people! In C#, from Eden to the current version 7, enums have always remained true to the enumerated type definition: a data type consisting of a set of named values called elements, members, enumerals, or enumerators of the type. Most if not all languages support this; as a matter of fact, you'd be hard pressed to find a language that doesn't define an enum type. Many other modern languages, though, provide an expanded enum type, one that does more than just enumerate values.
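Java is one such language: its enum members are full-blown objects that can carry fields and behavior. As a quick illustration of an enum that does more than enumerate values, here's a minimal sketch (the Planet type and its members are my own toy example, not something from this article):

```java
// A Java enum whose members carry data and behavior,
// not just an underlying integral value as in C#.
enum Planet {
    MERCURY(3.303e23, 2.4397e6),
    EARTH(5.976e24, 6.37814e6);

    private final double mass;   // kilograms
    private final double radius; // meters

    Planet(double mass, double radius) {
        this.mass = mass;
        this.radius = radius;
    }

    // Each member can answer questions about itself.
    double surfaceGravity() {
        final double G = 6.673e-11; // gravitational constant
        return G * mass / (radius * radius);
    }
}

class PlanetDemo {
    public static void main(String[] args) {
        System.out.printf("%.2f m/s^2%n", Planet.EARTH.surfaceGravity());
    }
}
```

Each constant is constructed with its own data, and callers get type-safe behavior straight off the enum member; that's the kind of expressiveness this series aims to recover in C#.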
I'm not going to argue whether that's a good thing or not here; that's a topic for a larger discussion. Rather, I'm going to show approaches to implementing what I call "advanced enums" in C#, since they aren't natively supported. Along the way we'll see many of the advantages of more sophisticated enumerations, their uses, and their implementations. Take this with a grain of salt: if you are a language purist, you probably won't like what you see. I'm going to try and change your mind.
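To preview where this is headed, one classic approach is the type-safe enum pattern: a closed set of named instances of an ordinary class. Here's a rough sketch in Java; it translates almost line-for-line to C# (`static final` becomes `static readonly`). The OrderStatus type and its members are hypothetical names I made up for illustration:

```java
// Type-safe enum pattern: the private constructor guarantees that the
// only instances of OrderStatus are the named constants declared below,
// and each constant can carry extra data and behavior.
final class OrderStatus {
    private final String label;
    private final boolean terminal;

    private OrderStatus(String label, boolean terminal) {
        this.label = label;
        this.terminal = terminal;
    }

    public static final OrderStatus PENDING   = new OrderStatus("Pending", false);
    public static final OrderStatus SHIPPED   = new OrderStatus("Shipped", false);
    public static final OrderStatus DELIVERED = new OrderStatus("Delivered", true);

    public String label() { return label; }
    public boolean isTerminal() { return terminal; }
}

class OrderStatusDemo {
    public static void main(String[] args) {
        OrderStatus s = OrderStatus.DELIVERED;
        System.out.println(s.label() + " terminal=" + s.isTerminal());
    }
}
```

Because the constants are ordinary objects, they can hold per-member state (like `terminal` above) and expose methods, which is exactly what a plain C# enum can't do on its own.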
There are different terms that describe the job of someone who creates software, and some of those terms carry a stigma along with them. Here's my attempt to rant about what I think the difference is between two very popular ones: Software Engineer and Programmer.
If you have ever looked into Scala at any point, you would've heard the term Algebraic Data Types (ADTs).
The goal here is to attempt to explain what that is, since it's kind of an obscure concept from an OO viewpoint. Even though you could draw a parallel between algebraic data types and composition vs. inheritance, it's not quite the same thing. So it's worthwhile to try to understand ADTs for what they are.
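To give a rough OO flavor of the idea: in Scala you'd declare a sealed trait with case classes, i.e. a type whose set of variants is fixed. Here's a hypothetical encoding of a `Shape = Circle | Rect` sum type as a closed class hierarchy in Java (my own toy example, and only an approximation of what Scala gives you natively):

```java
// A sum type Shape = Circle | Rect encoded as a closed hierarchy.
// The private constructor means only the nested subclasses below can
// extend Shape, mimicking Scala's "sealed" guarantee.
abstract class Shape {
    private Shape() {}

    abstract double area();

    static final class Circle extends Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override double area() { return Math.PI * radius * radius; }
    }

    static final class Rect extends Shape {
        final double w, h;
        Rect(double w, double h) { this.w = w; this.h = h; }
        @Override double area() { return w * h; }
    }
}

class ShapeDemo {
    public static void main(String[] args) {
        Shape s = new Shape.Rect(3, 4);
        System.out.println(s.area()); // prints 12.0
    }
}
```

The key property is that a `Shape` is *exactly one of* a known, closed set of cases, which is what makes exhaustive case analysis safe in languages that support ADTs directly.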
With all the hype and buzz around microservices and containers and APIs and (insert buzzword here), many in the industry are talking about the CAP theorem. It seems like every software engineer is expected to know what it is all of a sudden. OK, that may be an exaggeration; clearly, if you're not building distributed applications, you won't have a clue what I'm ranting about here... But let's face it, who isn't writing distributed applications these days?
So, the age-old question: Java vs. ‘fill in the blank’. If you are a Java developer (sorry, Engineer), then you love Java, and you spend time arguing with co-workers about why Java is better, or at least not worse, than their language of choice. These arguments never end, and will never end, because there are so many languages out there, and every month some new language is created that is claimed to be the solution to all our problems.