Our Story

Safe Software Solutions is an international company founded in 2008 in the second smallest state in the union. It is the commercial entity behind the CommandGit product.

Wilmington, Delaware

CommandGit Journey

For the past 30 years, I have been working as a software engineer and have always been interested in developer tools that make my job easier. During my free time, I have created several free automation applications and utilities, allowing me to explore technologies I may not have had the chance to use in my daily work. Many of these projects were driven by the hope that they could help my colleagues, while also providing me with the opportunity to stay up-to-date with the latest trends in the industry. It has been very rewarding to see my software being used by my work peers.

In 2012, I attempted to introduce Git at my workplace as a replacement for the outdated Subversion version control system. Some of my colleagues were hesitant to adopt it due to its command line interface. To address this issue, I began developing CommandGit, a graphical user interface for Git that would make it easier and more user-friendly for my colleagues to use. By combining the power of the command line with the simplicity of a graphical interface, I hoped to help my team overcome their hesitation and fully embrace Git.

As a developer, I understand the challenges that come with using a command line interface (CLI). I also recognize the benefits of using a graphical user interface (GUI) to make certain tasks easier and more approachable. With these ideas in mind, I set out to create a tool that would combine the power and flexibility of a CLI with the simplicity and accessibility of a GUI.

My goal with CommandGit was to create a transparent, user-friendly tool that would enable developers to easily create and execute Git commands and scripts, while still providing them with the knowledge and understanding of what those commands were doing behind the scenes. I wanted to create a tool that would not only serve as a replacement for the command line, but also as a learning tool that would help developers gain confidence and familiarity with the CLI.

By combining the power of the CLI with the simplicity of a GUI, I believe CommandGit can help developers overcome their hesitance to use the command line and fully embrace the benefits of Git.

In the early stages of development, I used C++ and MFC to build CommandGit. However, after a year of slow progress, I decided to switch to C# and Windows Forms in order to speed up the development process. I was using these technologies in my day job and believed that .NET would continue to improve and address any deficiencies in the platform.

As I continued to work on CommandGit, I realized that I had invested a significant amount of personal time into the project and began to consider monetizing it. At the same time, I recognized that the Windows Forms user interface was functional, but did not have the modern look and feel that I wanted for a commercial application.

Throughout the development process, I relied heavily on Git and a bug tracking system to organize my work and track progress. As CommandGit became usable, I started using it for all of my Git commands related to the development of the application, which allowed me to test it extensively and quickly fix any bugs that I discovered. Overall, the development of CommandGit has been a valuable learning experience for me and has helped me to improve my skills as a developer.

Back to coding

To improve the appearance of my Windows Forms application without the use of third-party plugins, I decided to redesign and re-engineer the GUI layer from the ground up. This required additional time and effort on my part, but I believed that it would be worth it in the end. I chose to use WPF for the new GUI because it offered greater flexibility and ease of development compared to other options.

The switch to WPF proved to be a valuable decision, as it enabled me to create more attractive and user-friendly screens for my application. The built-in support for DPI awareness and the ability to easily size and arrange screen controls made the development process much smoother. Additionally, the use of WPF allowed my application to look great on high-resolution screens without any fuzzy text elements, which was a significant improvement over the Windows Forms version.

Throughout the development process, I remained committed to avoiding the use of external libraries in my code whenever possible. This allowed me to maintain control over the entire application and ensure that it met my standards for quality and performance.

.NET 5

When .NET 5 was released, I saw it as an opportunity to convert my application to the latest version of .NET. This would make the deployment and distribution of my application easier and more straightforward. Additionally, the performance improvements in .NET 5 were too significant to ignore.

While I did miss the faster performance of my C++/MFC applications, the ease of development and deployment offered by .NET more than made up for it. The self-contained publishing option in .NET 5 was particularly useful, as it ensured that my application would run on my clients' computers regardless of the Windows environment.

Overall, I am confident that the switch to .NET 5 was the right decision for my application, and I look forward to continuing to improve and optimize it in the future.

Backend API and Cloud Computing

I started to look for a way to implement my licensing and application update model via a cloud service. There were a few options to consider. I ended up choosing Azure; I used AWS at my day job, but I felt it was too much to handle on my own, and Azure's serverless option seemed like a good alternative. I tried serverless functions since they are part of Azure's pay-as-you-go model, where you only pay for what you use, and I spent all my free time implementing this technology on the back end.

Since my app was a hybrid at this point, meaning a desktop app with cloud components, I quickly discovered one downfall of the serverless architecture: it falls asleep. Yep, my desktop application would take up to 30 seconds to start up while checking the user's license or the state of the trial period. This was unacceptable. I found workarounds like Azure Logic Apps, which worked well but defeated the purpose of the pay-as-you-go model: basically, the logic app would run on a schedule and wake up the serverless API every five seconds. This meant that my users could still experience loading times that depended on that five-second window, while the constant pings drove up the serverless cost. That was unacceptable, especially for a desktop application that is supposed to be snappy, unlike its web app cousins.

A New Beginning

As I previously mentioned, I used CommandGit for all of my Git interactions and it worked well for me. I enjoyed using my app for development and being able to test it while continuing to work on it. However, the design and requirements for my project shifted toward cloud and backend API development, which required me to use cloud CLIs and deployment solutions that I was not comfortable with. As a single developer, these solutions seemed overly complex and like a waste of time. In addition, I was tired of having to constantly type CLI commands. While CommandGit took care of my Git needs, the numerous other CLI commands were starting to become tedious. My backend solution was still a work in progress, and the research ahead would likely involve learning even more CLIs.

As I continued to think about the problem, I had a breakthrough: I needed to make CommandGit work with any CLI. At first, I thought this was a crazy idea and that it wouldn't work. But as I considered it more carefully, I realized that it was not as crazy as it initially seemed and that I was actually closer to a viable solution than I had thought. With this realization, I became more confident that I could make CommandGit work with any CLI, and I began to explore how I could make this happen.

With this new goal in mind, I set out to make it happen. The first step was to integrate other shells into CommandGit. I already loved using Git Bash for my Git commands, so I decided to add Windows PowerShell and the Command Prompt as well. This allowed me to write and execute PowerShell scripts or CMD commands from a single button, depending on the type of terminal I had configured for the project or opened from the toolbar. This added flexibility and made it easier for me to work with different types of CLIs from within CommandGit.
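
As a rough illustration (not CommandGit's actual code), routing a single button's command to whichever shell is configured boils down to building a different process invocation per shell. The executable paths and the BuildStartInfo helper below are assumptions made for this sketch:

    using System.Diagnostics;

    enum ShellType { GitBash, PowerShell, Cmd }

    static class ShellRouter
    {
        // Build the process description for a command based on the shell
        // configured for the project. Paths are typical defaults, not guaranteed.
        public static ProcessStartInfo BuildStartInfo(ShellType shell, string command) =>
            shell switch
            {
                ShellType.GitBash => new ProcessStartInfo(
                    @"C:\Program Files\Git\bin\bash.exe", $"-c \"{command}\""),
                ShellType.PowerShell => new ProcessStartInfo(
                    "powershell.exe", $"-NoProfile -Command \"{command}\""),
                _ => new ProcessStartInfo("cmd.exe", $"/C {command}")
            };
    }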

To enable running any command at any time, I redesigned the flow of CommandGit. The new toolbar included buttons for switching between shells, which made it easier to use different CLIs. I also wrote about 400 commands and organized them into separate categories to help users get started. Initially, I hadn't planned on doing this, but since I was using the app regularly for my own development, I decided to share my knowledge with others by including some built-in commands. This added value to the app and made it even more useful for developers.

In designing the main application screen, I tried to keep it slim and simple, with the idea of having it displayed on the left side of the terminal as an easy-to-use addition without taking up the full screen. I wanted the process of running commands to be straightforward and transparent, with a list of available commands on one side and the terminal screen on the other. I also added the ability to save the screen positions of the terminal and the CommandGit application's main screen, so that users could easily maintain a consistent and efficient workflow. Overall, I aimed to create a simple and intuitive interface that made it easy for developers to use CommandGit with any CLI.

As I continued to expand the capabilities of CommandGit, I had another realization: in addition to executing individual commands or groups of commands with a single button click, users could also execute commands on a schedule in the background. This was a powerful idea with many potential uses, and I could see myself using it in my own CommandGit development, such as checking the health of my Linux instances in Azure or quickly pulling NGINX error logs from a web server. I didn't have time to set up and manage open-source telemetry tools, which seemed like too much work and increased the risk of failure or wasted time learning the wrong technology. The ability to schedule commands in CommandGit was exactly what I needed and I was excited to make it available to others who might find it useful.
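
To make the scheduling idea concrete, here is a minimal sketch, not the actual implementation, of what a background schedule amounts to: a timer that periodically hands a stored command to a shell and captures whatever comes back. The five-minute interval and the git command are just example values.

    using System;
    using System.Diagnostics;
    using System.Timers;

    // Minimal sketch: run a stored command every five minutes and hand the
    // captured output to whatever comes next (logging, distribution, filtering).
    var timer = new Timer(TimeSpan.FromMinutes(5).TotalMilliseconds);
    timer.Elapsed += (_, _) =>
    {
        var psi = new ProcessStartInfo("cmd.exe", "/C git fetch && git status")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using var process = Process.Start(psi);
        string output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        Console.WriteLine(output); // placeholder for the logging/distribution step
    };
    timer.Start();
    Console.ReadLine(); // keep the sketch alive while the timer runs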

To be able to understand the outcome of scheduled commands, I created a command logging system for CommandGit. This system included a user interface and a logging mechanism that captured the output of all scheduled commands. I also added filters and log categories to make it easy to navigate the log data and quickly find the information that was relevant. This enabled users to track the results of their scheduled commands and understand the output of those commands more easily.

In some cases, users may want to know the results of their commands at the time of their execution. To address this need, I added the ability to display the results of commands as they are running in CommandGit. This allows users to see the output of their commands in real time, rather than having to wait for the command to finish before viewing the results. This can be useful for monitoring the progress of long-running commands or for getting feedback on the results of commands as they are executed.
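
For the curious, streaming output while a command runs is standard .NET Process usage. In this hedged sketch, the Console.WriteLine calls stand in for whatever the CommandGit screen actually does with each line:

    using System;
    using System.Diagnostics;

    var psi = new ProcessStartInfo("cmd.exe", "/C ping -n 5 127.0.0.1")
    {
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using var process = new Process { StartInfo = psi };

    // Each line is raised as an event the moment the command produces it,
    // so progress can be shown instead of waiting for the command to finish.
    process.OutputDataReceived += (_, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
    process.ErrorDataReceived += (_, e) => { if (e.Data != null) Console.WriteLine("ERR: " + e.Data); };

    process.Start();
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();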

The distribution portion of the scheduling was born. I designed and implemented a way to capture the command output, so any scheduled command could report its outcome and notify the user when it was executed. Initially, this was a popup screen on the desktop, which was OK, but I then realized that working in a team environment required some form of distribution of command output. I quickly started work on the command distribution system and added the ability to send data via email, Slack and Teams. I was quite happy with this approach and the app became even more useful.

That was great, but what if I only needed to distribute the command output sometimes? Let's assume that the NGINX logs had no errors and the output of my scheduled command was boring and negligible most of the time. Distributing this type of information would be disruptive for my team if someone had to look at a new Slack message every five minutes, since I had configured the schedule for five-minute intervals. I certainly wanted to catch any issues within a very short time of them occurring; that part was correct and made sense. The distribution, on the other hand, did not meet the necessary need-to-know approach. No one needed to know that everything was running smoothly and there were no issues. It should be more in line with "no news is good news".

The conditional distribution criteria portion of the scheduling was born. Since I already had the command output, I could scan that data and match it against the criteria on the distribution screen. For example, distribute the output of a scheduled git status command only when the search term "up to date" was no longer found in it, so I would be notified as soon as my server branch had new commits, before I had too many conflicts to wrestle through. This implementation worked well, another chapter of the application felt right, and I could move on to the next breakthrough.
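
The criteria check itself is conceptually simple. Here is a hedged sketch of that decision; the ShouldDistribute helper, its parameters and the sample git output are illustrative placeholders, not CommandGit internals:

    using System;

    // Decide whether a scheduled command's output is worth distributing.
    // "Notify only when the term is missing" matches the "no news is good news"
    // approach described above; the opposite polarity is also covered here.
    static bool ShouldDistribute(string commandOutput, string searchTerm, bool notifyWhenFound)
    {
        bool found = commandOutput.Contains(searchTerm, StringComparison.OrdinalIgnoreCase);
        return notifyWhenFound ? found : !found;
    }

    // Example: a scheduled "git status" whose output no longer says "up to date"
    // means the server branch moved, so the result should be sent out.
    string output = "Your branch is behind 'origin/main' by 2 commits.";
    if (ShouldDistribute(output, "up to date", notifyWhenFound: false))
    {
        Console.WriteLine("Send output via email, Slack or Teams here.");
    }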

Another large portion of the design was the search capability. Since we are dealing with many buttons across multiple categories, I had to invent a way to easily search for and find relevant commands. This was not easy, and I tried many different solutions before arriving at what it is today. It is not the most straightforward, but I can always find what I'm looking for relatively quickly, even with many commands in the project.

Finally, I was able to configure and use any CLI I could get my hands on, right from within CommandGit. That was a great accomplishment and I truly enjoyed the end result. Maybe it's not the last Coca-Cola in the desert, but it gets the job done and beats hand-typing or searching for old commands for half of my day, not to mention the many convenience functions like color-coding categories, scheduling commands, distributing command results to others, or setting up safety message screens before executing sensitive commands. And again, as I began to use CommandGit with all its new features, I started to move on to the next revelation.

I will spare you the rest of the details here; they are all covered in the help file. What I can tell you for sure is that the development of this application will not stop for a very long time. I am always looking for new and innovative ways to improve it, and I welcome any suggestions and constructive criticism. We all learn from our mistakes, and this is no exception to that rule. Please drop me a line; whether you love the app or hate it, let me know what I can do to make it more useful for you.

Payment Model

Once the main application was developed, I had to get back to something that had been lingering in the background for a long time. How could I distribute the app as a desktop application, yet implement a payment model and still keep the application secure for me and for my customers? Sure, Adobe has done it with their suite of applications that I enjoy and use, but they have a gazillion developers and basically unlimited resources to do it with.

I needed to come up with a payment system and a secure implementation of it to make sure the licensing model was actually useful. Most SaaS implementations are straightforward: everything is web based, so you control access via user signup and you know who can log in and who cannot. In my hybrid model, it's not so clear, as CommandGit is a Windows application that needs to be installed locally yet manage its paid users via a cloud backend.

I also wanted a fully functional trial run of CommandGit without any signups or user payment information. I know I don't like to give out my email or card info just to try an application that I may not like or care to use after the trial ends. Not to mention that some apps make it really difficult to cancel such options, and I just didn't want to put my users through that.

This was not the easiest thing to do, but the end result was a somewhat acceptable solution. I used Microsoft Azure for the cloud API, and with some JavaScript and C# code in the Windows application, I was able to implement something that is manageable. Nothing is hacker-proof and my app is no exception, but my earlier objectives were accomplished and I am quite happy with the outcome.

The Current State

At the moment (2022), the main application is a C# and WPF implementation built on .NET.

The cloud API was moved from the serverless model to a Linux-based OS running on Azure, utilizing NGINX as the web server with all the goodness NGINX has to offer.

This website is also running on a Linux OS with NGINX.

The application is still a work in progress and is being improved as much as possible. So many developers, DevOps engineers and sysadmins are now working from home, and many were thrown into the CLI world without a chance to adequately prepare. Then there is the group that never really subscribed to the CLI paradigm and never cared for it more than was necessary to accomplish tasks at work. I have to admit, I am partially in that group. This is why I thought of creating CommandGit in the first place. Sure, I wanted to help others, but I also wanted to help myself just as much. I think there are many of us who can benefit from this GUI/CLI duo. So give it a chance; it may be worth your time.

All the application development and design is done by me. I am happy to learn new technologies and just as happy to dive in and start coding and implementing my visions. If there is a person you would like to blame for a badly implemented feature, that would be me :) Feel free to send me your hate mail or a few words of encouragement. I read everything, so even if I don't have time to reply right away, rest assured that I read your message and I am either learning from it or I deleted it ;)

And yes, there is a free trial, so please take a look and hopefully you will find it interesting.

Thanks for reading.

Daniel Hofman

The founder of Safe Software Solutions, LLC