Last time we looked at List<T>. This time we will look at another generic collection defined in the System.Collections.Generic namespace: Dictionary<TKey, TValue>.
The most important implementation elements of the Dictionary<TKey, TValue>:
- buckets - sets of elements with similar hashes
- entries - elements of the Dictionary<TKey, TValue>
- freeList - index of the first free place
- freeCount - the number of empty spaces in the array, not at the end
- count - the number of elements that are currently in the Dictionary<TKey, TValue>
- version - changes as the Dictionary<TKey, TValue> is modified
…and a few more equally important elements that Entry contains:
- key - the key identifying the element, of type TKey
- value - the value of the element, of type TValue
- hashCode - a numeric value used to identify an object in a hash-based collection
- next - describes the next item in the same bucket
The dictionary uses an array of Entry structures to store data. To get a better understanding of how this really works, you need to know what exactly a hash table is.
A hash table (also called a hash map) is a data structure that implements an associative array, which allows you to store pairs - each pair contains a key and a value. With the key, we are able to quickly find the value associated with it. According to the dictionary’s equality comparer, the key is unique within the entire associative array.
In .NET the hash table contains a list of buckets to store values. A hash table uses a hash function to compute an index based on the key. This allows us, for example, to find the correct bucket with the value we are looking for. The situation is the same for other operations - by calculating the hash code we can add an element to the appropriate bucket, at the index designated for it. We will continue to explain this in more detail later in this post.
The value in the bucket indicates the index in entries + 1. As in the infographic: a value of 3 points to index 2, and a value of 2 points to index 1. The next property points to the next item that is in the same bucket. In the picture it is additionally marked with an arrow. When next is equal to -1, it means that it is the last item in the bucket.
In the previous section, we mentioned that the key must be unique. Now let’s look at an example in which we want to add a new Entry, when the key doesn’t exist yet and there is one free space that is not at the end of the array. The first operation in this case is to compute the hash code and find a suitable bucket using the formula hashCode % buckets.Length. When we find this bucket, we compare the hashCode of the new element we want to add with the hashCode of the first Entry, then move on to the next one (pointed to by next) and repeat the comparison.
If none of the existing hashCodes are the same, then we add the new element to the first empty space. Our version grows and the value of the bucket points to the last added element. If the modulo result points to a bucket that already contains an item (we call this situation a hash collision), after adding the new element its index in the entries array becomes the value of the bucket, and its next field is set to point to the previous item, resulting in a chain of items.☝️
Let’s look at the case where we want to change the value of an existing Entry. In the first steps, nothing changes from the previous example - we calculate the hashCode and find the appropriate bucket. After that, the hashCodes of the elements in the bucket are compared. When we have verified that the hashCode of the new Entry is the same as an existing one, the keys are compared. If the keys are the same, the value is overwritten and the version is incremented by 1. It’s so simple and interesting at the same time! ✨
Let’s look at another interesting situation. If we want to add an element to the dictionary and there is no more space in it, we have to resize the array before this operation. The first step is to create an empty, enlarged array whose size is equal to the nearest prime number greater than or equal to double the current size. Prime numbers are used to minimize the probability of hash collisions. The next step is to copy the entries, calculate the hash code for the new element and find the right bucket. Then, as in the previous examples, the element is added to the first free spot of the entries array.👇
We’ve already learned what adding elements to the dictionary looks like, so it’s high time to get to know the implementation details of removing them. The initial steps remain the same - the hashCode is calculated and the appropriate bucket is found. Then the hashCodes are compared and a check is made to see if the keys are the same. After that, the entry is removed and the version grows. When an element is removed from the array, the space it occupied goes into the freeList chain. The Dictionary stores the index of the next free element using the next property of the Entry structure. This way we know, in case we want to add a new element after deletion, which space it will occupy first - entry[1] - and, when adding one more element in turn, entry[0].👇
If you enjoyed this post and want to keep learning more, check out our social channels💜 Twitter Discord Instagram
https://github.com/microsoft/referencesource/blob/master/mscorlib/system/collections/generic/dictionary.cs https://docs.microsoft.com/pl-pl/dotnet/api/system.collections.generic.dictionary-2?view=net-6.0
In C#, List<T> is a generic collection that is used to store any number of strongly typed objects as a list, where T is the type of the objects. It allows us to perform a number of operations to find individual list items and modify them with operations such as adding, deleting or sorting.
The List<T> class implements the ICollection<T>, IEnumerable<T>, IList<T>, IReadOnlyCollection<T>, IReadOnlyList<T>, ICollection, IEnumerable and IList interfaces. The List<T> class is defined in the System.Collections.Generic namespace.
If we look deeper, internally the List<T> stores all elements as a reference to a single array of elements of type T.
The most important implementation elements of the List<T>:
- items - elements of the List<T>
- size - the number of items that are currently in the List<T>
- version - changes as the List<T> is modified
The List implementation uses an underlying array for storing items. The length of this underlying array is called Capacity.
Let’s analyze the first operation that adds items to the List<T>. For a complete explanation, you will need to look at Capacity in depth. With the default constructor the internal array starts empty, and the first Add grows it to the default capacity of 4. Even if you add fewer elements, as in our example, there is room for 4 elements in the internal array.
When we want to add a new element to the List<T>, the first step is to check if there is enough space for it in the array. If there is, we add the item to the end of our list and the version is incremented, because the array has changed.☝️
You may ask the question, what if I want to add more than 4 elements to the array? What now, since we have no more space for new items? In this case, before adding an element, the array should be resized.
In this case, the List<T> Capacity is doubled. A new array is created and, because the List<T> is modified, the version is increased by 1. In the next step, the values from the old array are copied to the new array with the larger Capacity - in our example equal to 8.
After all these operations we have enough space to add a new element to the list.
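The capacity growth described above is easy to observe directly (values shown assume the default growth policy of current .NET versions):

```csharp
using System;
using System.Collections.Generic;

var list = new List<int>();
Console.WriteLine(list.Capacity); // 0 - the default constructor starts with an empty array

list.Add(1);
Console.WriteLine(list.Capacity); // 4 - the first Add allocates the default capacity

for (int i = 2; i <= 5; i++) list.Add(i);
Console.WriteLine(list.Capacity); // 8 - adding the 5th element doubles the array
```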
Another operation we can perform is to remove an element from the List<T>. If we remove an item with a particular index from the middle of the list, then all subsequent elements change their indexes 👇
Just like in the example, the index of element D has changed after deleting C. The size of the list decreases and the versioning mechanism remains the same: when we remove an element, as in the previous examples, our version grows.
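The index shift on removal can be seen with a few lines of C#:

```csharp
using System;
using System.Collections.Generic;

var list = new List<string> { "A", "B", "C", "D" };
list.RemoveAt(2);                // remove "C" from the middle

Console.WriteLine(list[2]);      // D - subsequent elements shifted left by one
Console.WriteLine(list.Count);   // 3 - size decreased, Capacity stays at 4
Console.WriteLine(list.Capacity);// 4 - removal does not shrink the array
```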
The last issue we will touch upon in the context of the List<T> is the AsSpan() method - since .NET 5 available as CollectionsMarshal.AsSpan(list), as List<T> itself does not expose it directly.
Span<T> is a ref struct that can be stored only on the stack. This structure contains a pointer to a specific memory location and a length that describes how many elements from that memory location the given span has.
As we can see in the image, the AsSpan() method creates a span and sets a pointer to the first element of the array that stores the values of the List<T>. By wrapping the elements of the List<T> in a Span<T> structure, we can operate on a subset of the data without allocating additional memory. This is a great example of how using special types that allow slicing can increase the performance of our code ✨
Want to know more? Follow our social channels!💜 Twitter Discord Instagram
https://github.com/microsoft/referencesource/blob/master/mscorlib/system/collections/generic/list.cs https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1?view=net-6.0
What exactly is the Discord mentioned by us in the title? Let it answer for itself:
Discord is the easiest way to talk over voice, video, and text. Talk, hang out, and create a place to belong with your friends and communities.
Let us explain - Discord is a free application for voice, video or text communication. It sounds a bit enigmatic, therefore we will briefly describe how it came to be and, more importantly, where the idea for it came from.
The first need arose: it’s fun to play, and it’s even better to play and talk. The possibility of talking during an online game existed before - it was possible to communicate using the available solutions, like IRC and then Skype. None of these solutions fully satisfied the players. Out of need an idea appeared, followed by a project. As the creator, Jason Citron, writes himself:
Discord was born from the need to create a better way to chat and spend time online while playing video games with friends. […] We designed Discord for talking. There’s no endless scrolling, no news feed, and no tracking likes. No algorithms decide what you “should” see.
Since 2015, Discord has been constantly evolving and gaining an increasing number of users who surprise even the creators themselves with how they use their service.
Looking at the posts of dissatisfied users, Discord is definitely not perfect! The author himself admits that:
the first few interactions someone has with our service could be intimidating because Discord is complex with many features.
The complexity of the platform itself is not its only downside. Some users accuse the platform of imperfect anti-intrusion security and delays in introducing improvements. Despite the above-mentioned disadvantages, the desire to create a server through which the .NET community can share their thoughts and experiences in one place prevailed.
In order to organize this place, we created several thematic channels e.g.: code design, code review, architecture, performance and many others.
It’s not a forum, nor a question-and-answer page like StackOverflow - no tags, no boards, no threads (though you can create them!).
We have experienced what over 350 million (!) Discord users have already experienced:
me irl: 😐
— Joney (@Joneyology) October 1, 2021
me on discord: 🚦💺🗽✈️🚥🚉🎡🚉🗿🚀🏟🚀🚦🚉🚊🗼🗼🧢🗼😓😓😓😍💀🧢🚦😚💪😉🗿🥺🚉😍🚦🧢😚😉🚥👋💺😫🗿🐈🚀😍😳🧢💪😉🚦😫🚦🐈💪😕😓😚😓😂🗽🤒😓👿😓🤕👶🧔♂️🧜♂️🤱🧚♂️🗽🚀🗿✈️🏟🚈✒️🗳🧮📋📎📋📐📆✒️🗃🈶⛔️🈚️🛑♑️🈲♏️🧂🧃🧂🥤🏐🍼🥏🍯
and we must admit that it is worth trying, because this place makes it a pleasure to stay in touch and talk to other living people.
Are you interested? Jump into our Dotnetos server! 👇
https://blog.discord.com/how-were-making-discord-more-welcoming-for-everyone-ee152f198c60
https://blog.discord.com/your-place-to-talk-a7ffa19b901b
https://discord.com/blog/an-update-on-our-business
https://twitter.com/discord
The year is coming to an end and with it we expected a return to the pre-pandemic state, but reality presented a different scenario. It turns out that some changes have entered our everyday life for good and returning to the standards from before 2020 is simply impossible. Those changes affect all aspects of life, but in this article we will focus mainly on the labor market, narrowing down to the IT industry. We will try to analyze them in terms of both the employee and the employer.
In 2020 the pandemic surprised the whole world and forced, if not the introduction, then the acceleration of certain changes. The unstable situation and uncertainty about what the upcoming months would bring stopped all those who planned to change jobs. This, as a consequence, had an impact on the current year, and will probably leave its mark on the following years.
Remote work, data migration to the cloud and the technological revolution taking place in front of our eyes have added to an increased amount of work that can be done only by qualified specialists - and those are in disproportionately short supply relative to the current demand on the IT market. The growing competition for employees has only confirmed the excellent situation that IT specialists currently enjoy. Everything indicates that in the near future many companies will have to face the challenge of employee retention.
This should not come as a surprise, because the estimated value of the IT services market for 2022 is expected to be about 5.3 trillion US dollars. The upward trend is expected to continue until 2024, with an increase of 5% per year, which in the face of the global crisis is particularly important information for companies operating in this industry. Companies with a strong specialist background will be able to compete for the highest profits. What do employers have to do in order to retain their specialists and thus gain new clients? To answer this question, it is necessary to find out what motivates employees.
We will start with the theory presented by the American researcher Abraham Maslow.
Maslow’s classic hierarchy of needs.
We see that at its top are the needs of self-development, belonging, appreciation and job satisfaction. IT industry employees are one of the few groups with a relatively high degree of satisfaction with their salary. Thus, for most, the financial aspect is no longer the decisive impulse to change jobs. The most common factors influencing the willingness to change jobs are:
An important aspect in this discussion is working out a work-life balance, because it significantly affects productivity, job satisfaction and commitment. There is also a growing awareness among jobseekers, who choose employers that have clear values, consistent with their own.
The State of the Octoverse quotes interesting data: over 86% of respondents from the IT industry expect only remote or hybrid work after the pandemic, and thus expect the recruitment process itself to enable this form of interview.
Starting a new fully remote job causes some trouble. The most common problem during the onboarding of new employees is communication and bonding with teammates. This can result in decreased productivity. The solution seems to be to set a transparent onboarding process, taking into account 1:1 meetings, assigning an onboarding buddy and among others, providing up-to-date documentation. However, despite these difficulties, no change of direction is to be expected, as the possibility of remote and even hybrid work has become a new practice. The changes that are taking place are also evidenced by the analysis presented by Awareson, which indicates that 9 out of 10 surveyed IT employees plan to improve their technological competences next year. It only confirms the need for self-improvement that the group we are talking about presents.
The information we have provided above should make employers reflect and take some steps now. We want to focus especially on the aspect of the need to raise qualifications by specialists, because we have prepared some excellent on-line courses on .NET related topics:
within which we discuss each of the topics listed above in a detailed way. The course authors are brilliant specialists, repeatedly awarded MVP titles. The courses are dedicated to specialists at intermediate and advanced/expert levels. They provide students with the latest solutions, and enable contact and the exchange of experiences via a specially prepared platform and discussion forum. This is an excellent solution, especially nowadays, when on-site training is difficult due to the remote/hybrid aspect of work. What’s more, we provide special discounts for groups of more than 10 people.
To sum up, the upcoming months and years will be a challenge for employers, especially from the IT industry. On the one hand, the fight for the client, on the other, for the employee. As the forecasts show, there is something to fight for, as the value of the market is still growing and there is no indication of changes. Investing in a highly qualified specialist is not just a cost, it is an investment for years. For employees, it will be an excellent time to start salary negotiations or look for a place where they will be able to pursue their ambitions in highly challenging technical work or count on additional profits, such as training and courses.
Source materials:
Global information technology industry forecast 2019-2022, by region
Work–life balance, retention of professionals and psychological empowerment: an empirical validation
Please Turn Your Cameras On: Remote Onboarding of Software Developers during a Pandemic
dotnet-monitor with Prometheus and Grafana
Everyone likes dashboards! So let’s make one! dotnet-monitor was announced last year as an experimental tool for exposing REST endpoints to make diagnostics/measuring of your apps simpler. It could be seen as a “simple” ASP.NET Core app that wraps the Diagnostic IPC Protocol to communicate with the target .NET application - exactly the same protocol that is used by CLI diagnostic tools like dotnet-trace or dotnet-counters. I personally perceive it simply as the REST wrapper for those tools. So, for example, we have the /trace endpoint to start a session, or the /logs endpoint to capture logs. And finally, very recently dotnet-monitor was announced as a production-ready part of .NET 6 😍
But we are interested in the /metrics endpoint. As the docs say, it “captures a snapshot of metrics of the default process in the Prometheus exposition format”. Awesome! BTW, it’s a pretty simple format, so the endpoint returns text data like:
# HELP systemruntime_cpu_usage_ratio CPU Usage
# TYPE systemruntime_cpu_usage_ratio gauge
systemruntime_cpu_usage_ratio 0 1632929076109
systemruntime_cpu_usage_ratio 0 1632929076111
systemruntime_cpu_usage_ratio 0 1632929086110
# HELP systemruntime_working_set_bytes Working Set
# TYPE systemruntime_working_set_bytes gauge
systemruntime_working_set_bytes 1529000000 1632929066112
systemruntime_working_set_bytes 1529000000 1632929076110
systemruntime_working_set_bytes 1529000000 1632929076112
...
# HELP systemruntime_time_in_gc_ratio % Time in GC since last GC
# TYPE systemruntime_time_in_gc_ratio gauge
systemruntime_time_in_gc_ratio 0 1632929066112
systemruntime_time_in_gc_ratio 0 1632929076110
systemruntime_time_in_gc_ratio 0 1632929076112
We simply see here all System.Runtime counters (the same as observed by dotnet-counters) with their values. Having said all that, let’s see how we can configure all the necessary pieces together to have a working Grafana dashboard with some GC metrics 👀.
Let’s use the memoryleak .NET 5 app by Sébastien Ros, as I love it and use it a lot in our .NET Memory Expert course - because of its self-measuring capabilities. So, just do:
❯ git clone https://github.com/sebastienros/memoryleak.git
… and we are almost ready! To make things cleaner and more fun, I will put this app into a container using Docker. So, here’s my sample Dockerfile for doing it (and I add procps just as an example that you can 😇):
# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish -c release -o /app --no-restore
# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
RUN apt-get update && apt-get install -y procps
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MemoryLeak.dll"]
Then build it:
❯ docker build --pull -t memoryleak-image -f Dockerfile .
Because we will be running dotnet-monitor from a sidecar container, we need a shared volume to represent the /tmp folder (used by the IPC communication protocol), so let’s create one:
❯ docker volume create dotnet-tmp
And now we are ready to run our image, mount the shared volume and expose its port 80 as port 5000:
❯ docker run -it --rm -p 5000:80 --mount "source=dotnet-tmp,target=/tmp" memoryleak-image
Now you should be able to visit http://localhost:5000/ to see the app drawing its own memory usage.
dotnet-monitor
We can install dotnet-monitor as a global tool, but let’s stay with containers. There is an up-and-ready container image available on the Microsoft Container Registry, so it is as easy as the following command:
❯ docker run -it --rm -p 52323:52323 --mount "source=dotnet-tmp,target=/tmp" \
mcr.microsoft.com/dotnet/monitor --urls http://*:52323 --no-auth
Note that we are mounting the same shared volume here to make IPC communication possible. Now http://localhost:52323/processes should print only a single process with pid = 1, because it is observing the application container thanks to the shared /tmp volume. And http://localhost:52323/metrics should return metrics similar to those presented before. BTW, I’m using --no-auth just to get rid of any authentication/certificate issues for such a simple demo.
Prometheus is a free monitoring system and time series database. We need it to consume the just-prepared /metrics endpoint and store the results. Again, we could run/install Prometheus in various ways, but let’s keep it fun and use a container again. First of all, we need a configuration file; let’s call it prometheus.yml:
global:
scrape_interval: 15s
scrape_timeout: 10s
evaluation_interval: 15s
alerting:
alertmanagers:
- scheme: http
timeout: 10s
api_version: v1
static_configs:
- targets: []
scrape_configs:
- job_name: prometheus
honor_timestamps: true
scrape_interval: 15s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
static_configs:
- targets:
- localhost:9090
- job_name: memoryleak
honor_timestamps: true
scrape_interval: 2s
scrape_timeout: 2s
metrics_path: /metrics
scheme: http
static_configs:
- targets:
- host.docker.internal:52323
Nothing magical is happening here. The most important parts are the last 9 lines - configuring the job memoryleak that scrapes the http://host.docker.internal:52323/metrics endpoint every 2 seconds. The magical host.docker.internal hostname allows one container to communicate with another container exposed on localhost. This is networking stuff and for sure it would be done differently in a real setup. Good enough for a demo.
Prometheus is available as the ubuntu/prometheus image, so let’s use it! Having the configuration file, we need to map it to the internal /etc/prometheus/prometheus.yml, so the final command for running it is:
❯ docker run -d --name prometheus-container -e TZ=UTC -p 30090:9090 \
-v c:\your\path\to\prometheus.yml:/etc/prometheus/prometheus.yml ubuntu/prometheus
And… that’s it! Go to http://localhost:30090/ and you will see the Prometheus dashboard. By going to Status/Targets you should see the Up state for the defined target:
And on the Table/Graph panel you can sneak a peek at the gathered measurements, for example by typing systemruntime_ to see the suggestions for all recorded metrics:
And eventually, Grafana itself 😍 Let’s use the prepared image again:
❯ docker run -d -p 3000:3000 grafana/grafana
After going to http://localhost:3000/ you need to log in as admin/admin. Then go to Configuration/Add data source and add the Prometheus endpoint as one of the sources:
http://host.docker.internal:30090/
Now we are ready to Explore. In the Metric browser start typing systemruntime again… and you should see suggestions for the already available metrics:
And… that’s it! Everything is ready. Obviously, what we would like to achieve is a nice dashboard (or a few), so we need to create one. This is not an article about creating dashboards, for sure - you can find many tutorials out there. To leave you with something useful, here’s a JSON file you can import in Dashboards/Import to get graphs as nice as those at the beginning of the article:
sample-dotnet-monitor dashboard.json
Have a nice dashboarding!
PS. For me it is just mind-blowing how containerization and .NET on Linux make it all possible 🤯 We have four Linux-based containers that can run on Windows - one for the .NET 5 app, one for dotnet-monitor, one for Prometheus and one for Grafana. And it all just works!💜
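For the curious, the four containers above could also be wired together with a single docker-compose file. This is only a sketch using the image names, ports and volume from the post; note that inside a compose network, Prometheus could target the monitor service by name (monitor:52323) instead of host.docker.internal:

```yaml
version: "3.8"
services:
  memoryleak:
    image: memoryleak-image          # built from the Dockerfile above
    ports: ["5000:80"]
    volumes: ["dotnet-tmp:/tmp"]     # shared /tmp for the IPC channel
  monitor:
    image: mcr.microsoft.com/dotnet/monitor
    command: ["--urls", "http://*:52323", "--no-auth"]
    ports: ["52323:52323"]
    volumes: ["dotnet-tmp:/tmp"]
  prometheus:
    image: ubuntu/prometheus
    ports: ["30090:9090"]
    volumes: ["./prometheus.yml:/etc/prometheus/prometheus.yml"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
volumes:
  dotnet-tmp:
```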
Let’s be clear: open source software developers build the tools we all use. The world runs on GitHub’s open source code; each of us uses its benefits, often unconsciously. In this post, we are not going to focus only on code, because code is the result of someone’s work, and we would like to emphasize the value of that work, and specifically of the people who do it. Without a team, even the most interesting project will not develop. For maintainers, participation in projects is often the realization of their passion. Who if not enthusiasts spread their passion and commitment to others! Thanks to them, the world takes on color and flavor, regardless of the field in which they operate. However, while appreciating the work of a creator seems simple, because you can go to a concert or buy a book, appreciating someone whose work is not signed causes more difficulties. Should ‘invisible’ work not be visibly appreciated? We thought for a long time about how to solve this and honour those who work for the OSS community, and launched DotnetOSS Grants.
DotnetOSS Grants is a six-month GitHub sponsorship. Having in mind that it is difficult to convert passion into money, we found it a nice and universal way of appreciating the work that has been put into OSS development. In a dream world we would distinguish everyone in this way, but in reality we do not have such possibilities. Guided by our own opinions and observations, we selected a few people who made the greatest impression on us. Who they are, and a little more about the entire initiative, you can read on our website.
In the end we would like to thank all those who transform their ideas into code! No matter your role in a project, you are changing the world; your commitment is an inspiration for all of us.
“This platform does not support…” are the famous last words an engineer can hear before they start to think about implementing it on their own. This was the case when we thought about releasing our very first online course, Async Expert. After considering various options, we agreed to use a set of existing components and glue them together on our own. Having three engineers on board who are capable of using C# and the serverless approach, the choice of a platform was simple.
You could ask what the things to integrate are, anyway. How many moving pieces does it take to provide an online experience to attendees? Actually, quite a few.
The first tool, which we use to provide our attendees and subscribers with emails, is ConvertKit. This is the mailing platform that we selected a while ago. With its tags, segments, sequences and automations, it’s a perfect tool to deliver meaningful content using emails. This is the very same tool that we use to send broadcasts whenever we announce a new opening of our course.
The second part is related to hosting videos and downloads for our attendees. After reviewing multiple options, we selected Thinkific to deliver these materials. Initially, we used the discussion forum in there, but as our samples and discussions between attendees often include pasting formatted code, we changed the approach and moved to a different option for hosting them.
To make the discussions fruitful and easy to follow, we chose Discourse. It’s an awesome tool if you want to share code snippets. Additionally, it allows attendees to have good, well-formatted interactions between themselves and the authors/mentors.
The payment gateway that our platform uses is provided by Stripe. The process is augmented to apply VAT properly. This is the part that we spent a lot of time on. Dealing with various cases of inverse VAT, EU VAT (which is verified against the EU database) and other cases isn’t that easy! Finally, whenever the payment is done, an invoice is generated and sent to the person who purchased the licenses. This, especially for bulk business orders, might be a different person from the attendees.
All the parts mentioned above are integrated by a single serverless Azure Functions app. To interact with the specific components we use APIs and webhooks. This is done mostly by using RestSharp and performing specific calls against specific endpoints. We tend to use a limited part of the provided APIs - usually it’s sufficient to make a call or two to a specific component.
The whole ordering process is designed not to fail. All the parts related to the integration aspects are separated from performing the actual order. After all, it’s OK to accept an order and fix something later on. At the same time it would be a terrible mistake if the order failed due to a third-party service being unavailable at the moment.
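The “accept first, fix up later” idea above could look roughly like this - a sketch with hypothetical names, not the actual Dotnetos code:

```csharp
using System;
using System.Collections.Generic;

var failures = new List<string>();

// Hypothetical helper: record an integration failure instead of failing the order.
void TryIntegrate(string name, Action step)
{
    try { step(); }
    catch (Exception ex) { failures.Add($"{name}: {ex.Message}"); }
}

// 1. Accept the order first - this is the only step allowed to fail the request.
var orderId = Guid.NewGuid();
Console.WriteLine($"Order {orderId} accepted");

// 2. Best-effort integrations; a third-party outage must not lose the order.
TryIntegrate("ConvertKit", () => { /* tag the subscriber via the API */ });
TryIntegrate("Thinkific", () => throw new Exception("503 Service Unavailable"));
TryIntegrate("Stripe invoice", () => { /* generate and send the invoice */ });

// 3. Failures are logged/queued and fixed later; the order itself succeeded.
Console.WriteLine($"Pending fix-ups: {failures.Count}"); // 1
```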
It’s worth mentioning that the Azure Functions app runs on a consumption plan. Orders are not placed every single minute (we wish they were!), so we’re much better off having it started on demand.
Now, you could think that with the consumption plan the ordering process may suffer from the cold start problem. The cold start means that the app is loaded for the very first time and it takes a while to get it ready. We address it by sending a warming up request. It ensures that once the order button is hit, the app is warm and ready.
There are places where we use Cloudflare Workers, which is an edge (a.k.a. Region Earth) serverless platform. It provides interesting capabilities. One of them allows you to intercept the incoming traffic and augment the returned HTML. This is used to inject cross-promotions of our products. You can see it by visiting the Dotnetos Goodies page and scrolling to the bottom of it. If you visit other courses, like .NET Diagnostics Expert, and scroll again, you’ll see a similar section. This is done on the fly by a specific Cloudflare Worker.
The majority of our pages use Jekyll and are hosted on GitHub Pages. This allows us to use Markdown (which is how this post is written at the moment). Then, if needed, we can always augment the output with Cloudflare Workers.
One could think about using a bit more enterprisey tool for creating pages, but so far Jekyll is the one that we were able to leverage in every single case. It’s worth mentioning that the injections made by Cloudflare Workers are also based on the Jekyll-ed output.
During this year a lot of things happened. There are many more platforms and services that help online creators support their attendees. There are features that were introduced in the payment gateways that lift the burden of tax calculations, and many more. At the same time, by augmenting our Dotnetos Platform piece by piece, connecting different dots and addressing pain points as they arise, we were able to make it a really good tool to work with.
We wish you frequent use of it, either by joining our courses or simply visiting the pages with all the content that we provide.
We live and breathe .NET. We’ll be providing you with good news from the .NET world.
We produced a lot of courses and we’re not stopping here! More good things will come!
We’re working on bringing interesting people to this place and we can assure you that it will be awesome ;).
Make yourself at home and enjoy this place! As Brian Clark once said “Don’t focus on having a great blog. Focus on producing a blog that’s great for your readers.” - and that’s what we’re aiming for. This blog was made for you.