RubyConf 2018 – Running a Government Department on Ruby for over 13 Years by Jeremy Evans

(upbeat music) – Looks like it’s starting. Good morning, everyone. Thank you for coming out. I’m excited and honored to be presenting here at RubyConf. Today, I’ll be speaking
about my experiences running the programming unit for a
small government department for the last 15 years and using Ruby as the main programming
language for the last 13. My name is Jeremy Evans and
I have used Ruby since 2004. I’m the maintainer of
numerous Ruby libraries, the most popular of which is Sequel, the database toolkit for Ruby. I also maintain a web
toolkit for Ruby named Roda, as well as an authentication
framework named Rodauth that builds on top of Sequel and Roda. Now, I did not start
out wanting to maintain these and a bunch of other Ruby libraries. The reason I maintain
these libraries is because they form the core of
many personal projects as well as the core of the applications that I’m responsible for at work. I work for the California State Auditor and our office is in the capital
of California, Sacramento, but our auditors work all over the state and the mission of our department is to promote efficient
and effective government by performing independent evaluations of other government departments. We publish reports of our audit findings and we make recommendations to other government
departments and legislature. Our equivalent at the national level in the United States would be the Government Accountability Office. All 50 states have an
equivalent office to ours but the degree of independence and exact responsibilities
vary state to state. Before I go any further, please be advised that all things in this presentation are my personal opinions and not the opinions of my department. Lawyers have to have that. Alright, I’ll now talk
about software development as it is typically done
at other departments in this state. I will simplify it substantially in the interest of time
and exaggerate slightly, only slightly, for comedic effect. I’m not sure how similar
this is in other states or countries, but I’m guessing there is some overlap. Government software development revolves around large projects. In many cases, there is an
existing system in place, but it has various warts, such as running on a mainframe. So, executives get the idea of building a modern, always called modern, system and throwing away the old previous system. After the government gets the idea that they want to build
this modern system, before they can even start the project, they’ll spend a year going through a four step project approval process with the Department of
Technology and this is not one of the times I’m
exaggerating even slightly. The median time for this
project approval process is over a year. After approval, the government will start by developing a long request
for proposal detailing what they think the requirements
are for this system. This request for proposal will be prepared by analysts and program managers, without the involvement of the internal developers who work on the current system. Then the government will solicit proposals from companies to build the system and after reviewing proposals and removing those that are not considered
acceptable, the government will follow standard government
purchasing regulations and award the contract to build the system to the company with the lowest bid. Payment on the contract will be based on a checklist of deliverables, and there will be little consideration for how easy the software is to use and no consideration for how easy the software will be to maintain. The contractor will build the system using their own developers
who are incentivized to check off the deliverables
as quickly as possible. The contractors will use C# if using a Microsoft stack or
they will use Java otherwise. Near the end of the contract,
the software developer will train government staff
on how to maintain the system. There will be minimal, if any, unit or model testing done during development. There will usually be good integration or acceptance testing, but the testing will be done manually using checklists by the system contractor, government staff, or an external independent verification and validation vendor. Now, in some cases, the
contractor will base their solution on an
existing expensive enterprise resource planning system
such as SAP or PeopleSoft and then the contractor
will heavily customize the installation to meet
the system requirements and this will allow
the contractor to claim that they’re just using commercial off-the-shelf software in their proposal, when in reality, so much custom work is done that the benefits of using commercial off-the-shelf software do not apply. There will be a large amount of oversight of this project. The project managers will write
quarterly progress reports and send them to the Department of Technology, executive management, and an external independent project oversight company. So, when the project runs into problems during development, the
stakeholders will be aware, however, the stakeholders are not experts in software development
and other than approving the expenditure of more money
to hire additional developers and to lengthen the project schedule, there will not be much they can do to fix the problem and
I think most of us know how well adding developers to
a late project actually works. As you might be able to guess, I am not a fan of this approach
for software development. I’d now like to share
the alternative approach to software development that we use in our department and this
is not a fair comparison; our department is much smaller and not similar to other departments. First, we try as much as possible to avoid building large systems. So, when stakeholders request a large system, we discuss the situation with them and try to convince them to build a smaller system initially that only does what is most important. In some cases we are successful. In some cases we aren’t, but we always try to reduce the scope if a
large system is requested. We can always add more features later after the system is working. That is what we tell people. The idea that you should start with a working, simple system before expanding the complexity, instead of trying to build a complex system from the start, has been around for decades. John Gall wrote a book called Systemantics back in 1975, which
discusses how systems work and how they fail and
a couple of sentences in that book eventually became referred to as “Gall’s Law” and Gall’s Law states that “a complex system that
works is invariably found “to have evolved from a
simple system that worked. “A complex system designed
from scratch never works “and cannot be patched up to make it work. “You have to start over with
a working simple system.” Note that Gall was not referring to software development. He was discussing systems in general, be they computer based,
mechanical or manual. So, we have Gall’s Law in mind when we build systems in our department. In most cases we try to add new systems as subsystems or new features to our existing working systems, and we aim to complete the initial development on almost all projects in under a month, with the majority being completed in about a week, and when
I say completed, I mean having all features implemented with complete automated tests. Now, the systems may take longer to go into production due to delays in getting final approval from the stakeholders that
requested the system, but in my experience, stakeholders are quick to approve a project, as in “go develop this project,” and are less quick, after the system is built, to put it into production. A typical project for us would be to take a paper form based process with one or two levels of review and integrate it into our existing intranet site. So, there would be a form for employees to complete, forms for
reviewers to approve or deny the request, emails for notifying the next reviewer at each step, and reports for the employee, reviewers, and management to see the status of requests. We build all these systems internally using government staff
with no use of contractors. The developer responsible for building the system will meet directly with all interested stakeholders
in order to determine the system requirements. Often stakeholders will request features that are not strictly needed, or will request a specific feature that they think will meet their needs instead of simply telling us what their needs actually are. So, it’s the developer’s job to discuss the system requirements with
the stakeholders, figure out which features are important and what the underlying needs actually are and then try to build the simplest system that will meet the needs
of the stakeholders. The developer responsible for building the system knows that
they will be responsible for maintaining the system when it is in production, which
gives them an incentive to design the system in a way that will make maintenance easiest. The developer also knows that if the users have trouble using the system, they will be the ones to listen and respond to the users’ complaints and it will be their job to modify the system to make it easier to use and this encourages the developer to design the system to be easy to use so they won’t have to spend their time
making such modifications in the future. As the title of this
presentation suggests, all of our internal custom development is done in Ruby. Ruby has been our primary
development language for new projects since mid 2005 and all of our existing
internal custom systems were converted to Ruby by 2009. I’ll talk a little bit later about the stack that we use and how it has evolved over time. Now, we do not have a large amount of oversight during our
development process. The only external
oversight that we receive is from external security assessments and penetration tests
which are each performed about every three years. In terms of interim
oversight, the developer will work on the system until they think it is ready, and if they run into problems during development, they’ll usually talk to me and we’ll usually pair program to try to fix the problem. After the developer thinks the system is complete and ready, they’ll request a code review from me, so
I’ll go through, I’ll review all the code and provide
a list of requested changes, and that process will repeat until all issues have been addressed. After a clean code review, the developer will notify the stakeholders who will use a development version of the system and see if it meets their needs. Now, in some cases, developers or the stakeholders may request changes at that point, in which case the developer will discuss the system
requirements with them, agree on the necessary
changes and that process again may repeat until all
issues have been addressed. Before any of our systems go into production, we
require automated testing at both the model level and above for all parts of the system. We perform coverage
testing on a regular basis for all of our applications
and our line coverage is between 93 and 100%,
depending on the system. Since we prefer to build smaller systems before larger systems, one of the issues that we deal with is that we have to prioritize all outside requests to determine which ones to build first. So, one of the things we consider is the size of the request. We try to build smaller systems before larger systems, and hopefully that encourages the stakeholders to be more willing to accept building a smaller base system. Another thing we consider is how often a system will be used. If the system automates a
manual paper-based process, we will ask how many forms
are submitted per month and depending on the answer, it may give that system high priority
or even tell the requester that due to low volume,
it doesn’t even make sense to automate this process. We also try to consider how important the system is to the organization. If the current process is paper based, we will ask what are the
consequences of losing a form and if there are legal issues, we may want to automate a system
even if it has low volume just to ensure that we
can closely track progress to make sure that we
are following the law. And finally, we consider who
is requesting the system. As much as we like to be egalitarian, if the requester is an executive and they want us to give the system priority, the system is going to get priority. So, what do we use Ruby for? I mentioned earlier that Ruby is the sole language that we
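As a rough illustration, the prioritization criteria just described (request size, monthly volume, legal requirements, and who is asking) could be sketched as a simple scoring helper. All of the names and weights below are hypothetical, not the department’s actual code:

```ruby
# Hypothetical sketch of request prioritization, not the actual system.
# Smaller requests, higher volume, legal requirements, and executive
# requesters all raise the priority score.
Request = Struct.new(:name, :size, :forms_per_month, :legal_requirement,
                     :executive_requester, keyword_init: true)

def priority_score(req)
  score = 0
  score += { small: 30, medium: 15, large: 5 }.fetch(req.size)
  score += [req.forms_per_month, 50].min  # cap the volume contribution
  score += 40 if req.legal_requirement    # legal tracking outweighs volume
  score += 25 if req.executive_requester  # egalitarian in theory...
  score
end

requests = [
  Request.new(name: "travel form", size: :small, forms_per_month: 40,
              legal_requirement: false, executive_requester: false),
  Request.new(name: "big ERP rewrite", size: :large, forms_per_month: 5,
              legal_requirement: false, executive_requester: false),
  Request.new(name: "retention log", size: :medium, forms_per_month: 2,
              legal_requirement: true, executive_requester: false)
]

puts requests.sort_by { |r| -priority_score(r) }.map(&:name)
```

The exact weights matter much less than having an explicit, repeatable way to compare requests.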
use for software development, so this basically boils
down to the same question as what software development do we do. As you may expect, the
primary software development that we do is web applications. Our largest application
is our intranet site called the Hub and the
majority of our development is adding new features and
subsystems to the Hub and the Hub has some pretty
standard intranet features. It has a lot of information
on our internal processes, has our comprehensive manual,
has our employee directory. Each employee has a profile
page showing information about the employee such as their division, their position, audits
they worked on, audits they’re currently working on, and any awards they received. This profile page also has a link to a map of the
floor that they work on with their desk location highlighted and this makes it easy for new staff to navigate the office
and for existing staff to easily find new staff. Most of the new development centers around automating existing processes. When I started back in 2000, almost all processes were manual
paper-based processes, and now most of the common processes are automated via online forms. For example, submitting requests to take time off, attend
training, conferences, get reimbursed for
overtime, purchase supplies and equipment and many
others are automated. Records are kept so that employees can see the status of all
of their previous requests. Most of the processes have custom review and approval workflows that are based on different requirements. Some processes, the simple ones, will only require supervisory review, but in other cases, there’ll be between two and five levels of review, with the number of levels dependent on the specifics of the request and the employee’s
position in the department. Another web application that we developed is our recruiting system which is split in two parts, an
externally accessible part that appears to be part
of our public website and an internal system
for human resources staff. Most employees that we hire are entry-level auditors directly out of college or graduate school, and most of the recruiting system is designed to handle the recruiting process for these applicants. The recruiting system allows prospective auditors to apply to take our online exam and after applying, our
human resources staff will review the application
and if they approve it, the applicant is notified that they can take our online exam. The online exam is timed and has 75 multiple choice questions and you have to get about 80% of the questions correct
in order to rank highly enough to advance. The exam is actually fairly difficult and only about 30% of applicants do rank highly enough to advance. Assuming the applicant scores high enough and can advance further, they are notified that they can take our
online writing assessment. When the applicant begins
the writing assessment, they’re given a prompt and an upload form and they have a couple of hours to write a writing sample and submit it to us and this writing assessment is graded by our in house editor and only one third of applicants score
highly enough to advance. After that, there’s a phone interview and then there is an in house interview. The system handles all of the information related to those. It also handles all
the internal workflows related to processing these applications, and it has extensive
reporting capabilities and there are smaller subsystems in the recruiting system
that handle recruiting for more advanced auditing positions. Another major web
application that we developed is our recommendation system. In addition to reporting
our audit findings, one of our primary functions
is to make recommendations to improve government and the departments that we audit are required to respond to our recommendations on a regular basis on their progress implementing
the recommendations until the recommendations
have been fully implemented. The recommendation system is split into three parts. The first part is externally accessible and it allows other government departments to submit responses to our recommendations through our website. There’s an internal part
that allows our staff to add recommendations and to review the recommendations
that have been submitted by the departments. Each of those responses
goes through four levels of review and after being fully reviewed, the departments’ response
and our assessment of their response is posted on our website and this holds other
departments accountable and their implementation
of our recommendations is often considered when
the legislature reviews the department’s budget. We continue to follow up
on these recommendations that we make to departments
for up to six years after the release of our audit report. The third part of the
recommendation system is externally accessible and it allows the legislature, the press and the public to subscribe to be notified
about new report releases and new responses to
these recommendations. The system also allows
subscribing to be notified about things that are filtered
to specific policy areas. The last major web
application that we developed is our public website, which
like our recommendation system is also split into three different parts. Over 99% of the content
on our public website is generated by an internal system and cached as static pages. So, to update the content on our public website, we just run a web crawler over our internal system, which caches all the pages, and then those web pages are just copied onto our public web servers via rsync whenever we want to update content. We only have about 20,000 pages in our public websites, so this approach is feasible for us. Having almost all of our website consist of static pages is very helpful from a reliability standpoint. So, in the weird cases
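The crawl-and-cache idea can be sketched roughly like this; the page fetcher is injected as a callable so the example is self-contained, and the link extraction is deliberately naive (real code would use a proper HTML parser):

```ruby
# Toy sketch of "crawl the internal site, cache every page as static HTML".
# `fetch` is any callable that maps a path to an HTML body.
def crawl(start_path, fetch)
  pages = {}
  queue = [start_path]
  until queue.empty?
    path = queue.shift
    next if pages.key?(path)
    html = fetch.call(path)
    pages[path] = html
    # Naive internal-link extraction; a real crawler would parse the HTML.
    html.scan(/href="(\/[^"]*)"/) { |(link)| queue << link }
  end
  pages  # these cached pages would then be rsynced to the public servers
end

site = {
  "/" => '<a href="/reports">Reports</a>',
  "/reports" => '<a href="/">Home</a><a href="/reports/2018-1">2018-1</a>',
  "/reports/2018-1" => "<h1>Report 2018-1</h1>"
}
cached = crawl("/", ->(path) { site.fetch(path) })
puts cached.keys
```

With only about 20,000 pages, a full recrawl like this stays cheap enough to run on every content update.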
where we have had problems with the dynamic sections
of our public website, our users generally
don’t even notice because they are only accessing
the static content. Now, there is a small dynamic system that runs in our public websites which handles a few actions, mostly some different search features and there’s also a small internal
administrative application that we use for adding
reports to the website. As I mentioned, we use
Ruby for all development, so I’d like to talk a little bit about some of the non-web applications that we develop. For employees to log in to any of our internal web applications, we want them to use the same Windows user name and password that they use to log on to their computer because
we don’t want to have them remember a separate
user name and password. From a security
perspective, you don’t want your web servers talking directly to your Windows domain controllers. So, all of our web
applications authenticate by using SSL to connect to a
custom authentication proxy that is written in Ruby. The web applications submit the user name and the password provided by the user. The proxy first checks that the user is in a list of allowed user
names and then connects to Windows using LDAP over SSL in order to authenticate the password. We also use Ruby to implement
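A drastically simplified sketch of that proxy’s core check might look like the following; the LDAP bind is injected as a callable so the example stays self-contained (real code would bind against the directory using an LDAP library over SSL, and all names here are hypothetical):

```ruby
# Hypothetical sketch of the authentication proxy's core logic.
# `ldap_bind` is any callable that attempts an LDAP bind and returns
# true or false; injecting it keeps the sketch testable without a server.
def authenticate(username, password, allowed_users, ldap_bind)
  # Reject unknown user names before ever touching the directory.
  return false unless allowed_users.include?(username)
  ldap_bind.call(username, password)
end

allowed = ["asmith", "bjones"]
fake_bind = ->(user, pass) { user == "asmith" && pass == "correct horse" }

puts authenticate("asmith", "correct horse", allowed, fake_bind)
puts authenticate("mallory", "anything", allowed, fake_bind)
```

Checking the allowlist first means the domain controllers only ever see bind attempts for known accounts.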
two factor authentication for our VPN, so we use Open VPN which does not have native support for
two factor authentication. After Open VPN authenticates
their certificate, we have Open VPN run a
statically compiled C program which uses a Unix socket to connect to a server written in Ruby
and that Ruby server then connects to Windows
using LDAP over SSL to authenticate the
password for the user name that is specified inside
the VPN certificate. This approach is a little bit more secure than the standard approach for doing two factor authentication. In most cases, two factor authentication is done with the user name provided by the client, and with that approach, an attacker can compromise one employee’s VPN certificate and a different employee’s password and use the two together to gain remote access. So with our approach,
an attacker would have to compromise the VPN certificate and the password for the same employee and because we have about 200 employees, that’s hopefully about
200 times more difficult. We use Ruby to download
financial information from the state’s
mainframe on a daily basis for use by our financial auditing team. This used to require a gem that could do FTP over SSL, but thankfully, after Ruby 2.4 was released, we were able to switch to using Net::FTP from the standard library. We develop custom programs
to assist auditors with their audit work. Most recently, we used Ruby with Capybara, Selenium
and Headless Chrome to download almost 3,000 PDFs from a separate government website that required JavaScript to work. We then used Ruby with pdftk to extract the specific page from each of these 3,000 PDFs and combine all of the extracted pages into a single PDF so that the auditors could go through the results easily and include the data in
their audit work papers. We use Ruby in conjunction
with a Microsoft program named AccessEnum to produce reports on file access permission and changes in file access over time, so the former are reviewed annually with management to determine if the
permissions are appropriate and the latter are reviewed monthly to make sure no obvious security issues have been introduced. We use Ruby to check
the remaining free space on our web servers on a monthly basis to see if more space needs to be allocated on any of them. And for critical file servers that are prone to using up all the space, we have a similar program written in Ruby that automatically notifies
the appropriate manager whenever the free space falls
below a certain threshold. We use Ruby to check that our auditors are complying with our
data retention policies by scanning for material
related to released audits that the staff are no longer
supposed to be retaining. We have many reporting
programs that use Sequel to connect to internal
PostgreSQL and Microsoft SQL Server databases and create reports from them. So basically, Ruby is
our programming toolbox and pretty much everything
that we need to build, we can easily build using Ruby. How did we start using Ruby? When I was first given the task of maintaining our websites back in 2003, they were developed as static pages using a product called
NetObjects Fusion and while I had no previous professional
programming experience, I did have some exposure to PHP. I decided to use that. In late 2004, I heard about
Rails and I tried it out and I thought it was a great improvement over the spaghetti PHP that I was writing at the time, so after a few months of using Rails on personal projects and a few more trying it
out at work, I switched our intranet site over
to Rails in mid 2005. Now, in 2008, I learned about Sinatra and I was drawn to Sinatra’s
much simpler approach to web development. I started using Sinatra for
all of our new development and the initial versions
of our recruiting and our recommendation systems
were both written in Sinatra. In 2014, I was exposed to
the routing tree approach used by Cuba and I saw how it addressed the complexity issues
that we were experiencing in our Sinatra applications
while still being much simpler than Rails. I ended up creating a fork
called Roda and converting all of our applications to Roda before the release of Roda 1.0. On the database side, when
I was using PHP, I wrote all the SQL by hand
using basically raw SQL on the database driver. When I first started using
Rails, I used ActiveRecord which I think was a huge
time saver in comparison. After being exposed to
Sequel in 2008, I saw the benefits of Sequel’s
method chaining approach to building queries. I converted all of our ActiveRecord usage to Sequel that year and
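The appeal of that method-chaining style can be shown with a toy query builder. To be clear, this is only an illustration of the idea, not Sequel’s actual implementation; in real Sequel, `DB[:employees].where(active: true).order(:name)` builds SQL in much the same spirit:

```ruby
# Toy illustration of building a query via method chaining.
# Each call returns a new dataset without mutating the receiver,
# mirroring how Sequel's datasets behave.
class TinyDataset
  def initialize(table, wheres = [], order = nil)
    @table, @wheres, @order = table, wheres, order
  end

  def where(condition)
    TinyDataset.new(@table, @wheres + [condition], @order)
  end

  def order(column)
    TinyDataset.new(@table, @wheres, column)
  end

  def sql
    s = "SELECT * FROM #{@table}"
    s += " WHERE #{@wheres.join(' AND ')}" unless @wheres.empty?
    s += " ORDER BY #{@order}" if @order
    s
  end
end

ds = TinyDataset.new(:employees).where("active").order(:name)
puts ds.sql  # => "SELECT * FROM employees WHERE active ORDER BY name"
```

Because each step returns a new object, partial queries can be stored, passed around, and refined, which is much of what makes the chaining approach such a time saver.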
we’ve been using Sequel exclusively ever since. So, in the lower levels
of the stack, we’ve always used PostgreSQL as the database, starting with version 7.1. The operating system
has always been OpenBSD. Since we were already using
OpenBSD for our firewall, we had experience with it and security and simplicity are more
important than performance in our environment. For the web server, we
originally used Apache and WEBrick, then lighttpd and SCGI, then NGiNX and Mongrel
and then finally NGiNX and Unicorn which we’ve
been using successfully for many years. On the testing side, we use
Minitest, Minitest-hooks, Rack-test and Capybara for testing. All the web applications
are explicitly designed around a very simple web 1.0 experience, but we do use some HTML5 features such as required inputs and date inputs. We avoid using JavaScript
as much as possible, only using it when there
is no other way and we absolutely must have dynamic
behavior on the web page. Most commonly, this is for input forms that can accept an arbitrary
number of line items. We manually test the little JavaScript that we do use whenever we modify the related code. Now, all of our applications run the same stack described here and use the same library versions
and that makes it much easier to switch between applications
during development. Whenever we wanna upgrade a
library, we run one command which runs the tests for
all of our applications using the new library
version and we can see if anything breaks. This command can also be used to test on multiple versions of Ruby, and all of our applications pass our test suites on Ruby 2.3, 2.4 and 2.5. So, after upgrading
libraries, we may decide that we want to use the new features that have been added to the libraries and this is generally done starting with our simplest application and then applied to our remaining applications
in order of complexity. A couple of examples of
this are when we switched to using frozen Sequel datasets
and databases and models and when we started using
Refrigerator to freeze all of the Ruby core classes at runtime. Now, when you choose to
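What freezing buys can be demonstrated with plain Ruby; Refrigerator applies the same idea to the core classes themselves. The class below is a stand-in, since actually freezing `String` here would affect the rest of this example:

```ruby
# Demonstrates what freezing a class prevents: once frozen, it can no
# longer be reopened or monkey-patched at runtime, so a compromised or
# buggy dependency cannot quietly redefine its methods.
class AuditRecord
  def summary
    "ok"
  end
end

AuditRecord.freeze

begin
  AuditRecord.class_eval do
    def summary
      "tampered"
    end
  end
  patched = true
rescue RuntimeError  # FrozenError (a RuntimeError subclass) on Ruby 2.5+
  patched = false
end

puts patched                  # false: the monkey patch was blocked
puts AuditRecord.new.summary  # still "ok"
```

Instances of a frozen class still work normally; only modifications to the class itself are blocked.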
use Ruby without Rails, you are probably going to limit
the number of developers that you can hire that
already have experience with your stack. For some companies, it may
be easy to find developers that already know Rails, however, we attempted
to recruit developers at many different experience levels and specifically highlighted the fact that we use Ruby and we did not even have a single person apply with
any Ruby or Rails experience. So, Rails’ popularity advantage
really doesn’t matter to us. Now, the good news is, in our limited experience, a Roda and Sequel based web stack is easy for new programmers to learn. It is unwise to extrapolate from a sample size of one, but our current developer had no professional programming experience and had never programmed in Ruby before we hired her. She was able to quickly become productive and implement new features using Sequel and Roda, and I think
if we were using Rails, it probably would have taken
her substantially longer in order to become productive. Now, I am the department’s
Information Security Officer, so one specific focus
here for me is security. As you would expect, we
try to protect against the common vulnerabilities
in web applications. We mitigate cross site scripting by using Roda and Erubi to automatically escape output in our templates. We protect against cross site request forgery by using Roda’s route_csrf plugin and enforce the use of path and action specific CSRF tokens for all requests that would modify state. Many security vulnerabilities
in Ruby web applications stem from the use of unexpected parameter types that are submitted by an attacker. We protect against these unexpected parameter types by using Roda’s typecast_params plugin and Sequel’s type conversion to ensure that all parameter inputs that we’re accessing are of the expected type. We protect against SQL
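A stripped-down version of that idea, using only Ruby’s strict conversion methods, looks like this; the helper is hypothetical and only hints at what the typecast_params plugin does:

```ruby
# Hypothetical sketch of strict parameter typecasting.
# Integer() raises ArgumentError on malformed input, unlike
# String#to_i, which silently returns 0.
def typed_param(params, key, type)
  raw = params.fetch(key) { raise ArgumentError, "missing param: #{key}" }
  case type
  when :int then Integer(raw)
  when :pos_int
    i = Integer(raw)
    raise ArgumentError, "#{key} must be positive" unless i > 0
    i
  when :str then String(raw)
  else raise ArgumentError, "unknown type: #{type}"
  end
end

params = { "page" => "2", "q" => "audit" }
puts typed_param(params, "page", :pos_int)  # => 2
puts typed_param(params, "q", :str)         # => "audit"

begin
  typed_param({ "page" => "2; DROP TABLE users" }, "page", :pos_int)
rescue ArgumentError
  puts "rejected malformed page number"
end
```

Failing loudly on malformed input, instead of coercing it to something plausible, removes an entire class of subtle bugs and attacks.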
injection by using Sequel to construct all of our queries. We do not have any raw
SQL usage at all in any of our web applications. We use a restrictive content security policy in all of our applications to mitigate possible browser issues
in case an attacker is able to exploit a cross
site scripting vulnerability. Now, we go further than that. We take a defense in depth approach to our web application security. A lot of our security planning starts by assuming there is SQL injection or remote code execution vulnerability in the application and thinking about ways of implementing features that would make the vulnerability more
difficult to exploit and to limit the damage it can do. So, all of our applications run as separate operating system users with reduced privileges and with separate database users per application. For the applications with public facing components, such as the public parts of our recommendation and recruiting systems, those parts run as separate operating system users with fewer privileges and with separate database users that are only granted the minimum access necessary to perform the public facing functions. So, if there were a vulnerability in our public recruiting system, an attacker may be able to exploit it, but they would not be able to use it to change their exam or their writing
assessment scores, because the database user does not have the necessary privileges to do that. Likewise, a vulnerability in our public recommendation
system could only be used to add responses. It could not be used to update or modify existing responses. We use security definer database functions in our systems to grant specific types of database access. For example, password
authentication for users in our recruiting system
uses a database function that accepts the password and returns whether the password
matches the stored hash, but the database user does not have the ability to access the password hash directly, and therefore an attacker cannot export the password hashes to perform an offline attack on them. We also use security
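In PostgreSQL terms, a security definer password check like the one just described might look roughly like this; the table, column, and function names are hypothetical, and `crypt` assumes the pgcrypto extension:

```sql
-- Hypothetical sketch of a SECURITY DEFINER password check.
-- The function runs with its owner's privileges, so the calling
-- database user never needs SELECT access on the password hashes.
CREATE OR REPLACE FUNCTION check_applicant_password(a_id integer, pw text)
RETURNS boolean AS $$
  SELECT password_hash = crypt(pw, password_hash)
  FROM applicants
  WHERE id = a_id;
$$ LANGUAGE sql SECURITY DEFINER;

REVOKE ALL ON applicants FROM public_web_user;
GRANT EXECUTE ON FUNCTION check_applicant_password(integer, text)
  TO public_web_user;
```

The application can ask "is this password correct?" without ever being able to read the hashes themselves.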
definer database functions in our tests when setting
up database state when the database user does not have the necessary access to do so, and this allows for fully transactional testing, even when different database users are used. We run all of our applications on our own hardware in isolated subnets with strict ingress, egress
and loopback firewall rules that limit the types of connections that can be made to and from the servers, and we use per operating system user firewall rules. So, the operating system users for
our internal web applications are allowed to connect to our
custom authentication proxy, while the operating system users for our externally accessible applications are not allowed to do that. They’re restricted by the firewall. To mitigate arbitrary file access and remote code execution
vulnerabilities, we run all of our applications chrooted. Our applications are started as root, and after the application is loaded, but before it starts accepting connections, it first chroots to the application’s directory and then drops privileges to the application’s operating system user. So in addition to limiting access to the application’s folder, we also use file access permissions
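The startup ordering matters: code must be loaded first, chroot must happen while still root, and privileges must be dropped before accepting connections. Here is a sketch with the actual system calls stubbed out; real code would pass `->(dir) { Dir.chroot(dir); Dir.chdir("/") }` and `->(uid) { Process::Sys.setuid(uid) }`, both of which require root:

```ruby
# Sketch of the chroot-then-drop-privileges startup sequence.
# The syscalls are injected as callables so the ordering logic can be
# exercised without root privileges.
def secure_startup(app_dir, app_uid, load_app:, chroot:, drop:, serve:)
  load_app.call        # load all code first: requires fail after chroot
  chroot.call(app_dir) # must happen while the process is still root
  drop.call(app_uid)   # drop root before accepting any connections
  serve.call
end

steps = []
secure_startup("/var/www/app", 1001,
  load_app: -> { steps << :load },
  chroot:   ->(dir) { steps << :chroot },
  drop:     ->(uid) { steps << :drop },
  serve:    -> { steps << :serve })

puts steps.inspect  # => [:load, :chroot, :drop, :serve]
```

Reversing any two of these steps either breaks the application (loading after chroot) or weakens the sandbox (serving before dropping root).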
to ensure that attackers cannot read any sensitive configuration files that have secrets or other
sensitive information. Chroot is a very helpful security feature, but it has some problems when used with Ruby. In Ruby, the main issues with using chroot are that it does not work well with run time requires, and Ruby’s autoload feature specifically is a type of run time require. The use of autoload has
been strongly discouraged for many years, but popular Ruby libraries such as Rack and Mail still use autoload and that complicates their
usage in chroot environments. Now, to make it more
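A minimal sketch of that boot sequence, assuming a Unix system, a hypothetical `/srv/app` directory, and a dedicated `app` OS user (this is illustrative, not the department's actual code):

```ruby
require "etc"

# Hypothetical paths and user names for illustration only.
APP_DIR  = "/srv/app"
APP_USER = "app"

# Order matters: all requires (including any library autoloads) must run
# before chroot hides the real filesystem, and chroot itself must run
# before setuid gives up the root privileges chroot needs.
def harden_process
  # require "./app"  # eagerly load the application and its libraries here
  if Process.euid.zero? && File.directory?(APP_DIR)
    pw = Etc.getpwnam(APP_USER)
    Dir.chroot(APP_DIR)            # "/" is now the application directory
    Dir.chdir("/")
    Process::Sys.setgid(pw.gid)    # drop group first, while still root
    Process::Sys.setuid(pw.uid)    # then drop to the unprivileged user
    :hardened
  else
    :skipped                       # not started as root: nothing to do
  end
end

puts harden_process
```

After the drop, the process cannot regain root or see anything outside the application directory, so a remote code execution bug is confined to the app's own files and user.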
To make it more difficult to execute blind return-oriented programming attacks, which rely on exploiting consistent memory layouts, we have each of our Unicorn worker processes exec after forking, so all of our Unicorn worker processes have unique memory layouts. This has a fairly large memory cost, but we already run our applications on a leftover server with 256 gigabytes of RAM, and we're currently using about four gigabytes for all of our applications, so for us, the additional security is worth the extra memory cost.
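The fork-plus-exec idea can be sketched as follows (Unix assumed; this is a simplification, not Unicorn's real worker-spawning code):

```ruby
require "rbconfig"

# A plain fork shares the parent's memory layout; exec replaces the
# child's process image, so ASLR gives each exec'd worker an
# independently randomized layout.
workers = 2.times.map do
  fork do
    # A real server would exec the worker's startup script; here the
    # child just execs a fresh ruby that prints its own pid and exits.
    exec(RbConfig.ruby, "-e", "puts Process.pid")
  end
end

statuses = workers.map { |pid| Process.wait2(pid).last }
puts statuses.all?(&:success?)  # → true
```

The memory cost comes from losing copy-on-write sharing between the master and workers: each exec'd worker loads its own copy of the application instead of sharing pages with the parent.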
Finally, to make exploitation more difficult in general, and to reduce the kernel attack surface for privilege escalation, we limit the set of allowed kernel system calls for our applications to the minimum that they need to function. None of our web application processes are allowed to fork, exec, or send signals to any other processes, and if an application process does not need to accept uploaded files, it is also restricted from modifying or creating any files at all.

Combined, these security controls make it more difficult to successfully exploit our applications, and make it more difficult to use a successful exploit to attack other applications and systems. There is definitely a trade-off: some of these restrictions complicate development and cause problems with certain libraries that you need to work around. This also required modifications to a few libraries to allow them to work in chroot and fork-plus-exec environments.
After implementing these defense-in-depth security features and using them in production for the last 18 months, here are my recommendations on whether to consider implementing them.

Consider whether you have data that is worth protecting: how sensitive or confidential is the information that you deal with? If you're dealing with anonymized data or public datasets, maybe the data does not warrant the use of these security controls. Consider who can access your application. If you're only designing an application for internal organizational use and it will not be accessible from the internet, then the risk of attack is lower, and maybe in that case your effort is better spent on providing an improved user experience. Consider how much control you have over your application's environment. If you're running on your own hardware or virtual machines, you may be able to use most or all of these security features; if you're using a platform-as-a-service provider, your options are going to be limited to what the provider supports.

If your application is accessible from the internet and contains sensitive or confidential data, my recommendation would be to first look at using multiple database users, restricted database permissions, and security definer database functions to limit the possible risk of exploited SQL injection vulnerabilities. If you have the ability to configure firewall rules, I would recommend doing so; the initial implementation is fairly easy and the ongoing maintenance costs are low. If you have memory to spare and increased memory usage is not a problem, then consider implementing fork plus exec. And if your application, or the server it is running on, has any special access to other applications or systems that are not accessible from the internet, or security is a high priority compared to ease of maintenance, and you're running on your own hardware or virtual machines, then you can consider privilege dropping, chroot, and/or system call filtering.
I'd like to finish this presentation with a few opinions, starting with my opinion on what a successful government IT project needs. It needs executive management that is willing to try new approaches. Grace Hopper, one of the creators of COBOL, stated that "the most dangerous phrase in the language is, 'we've always done it this way.'" Considering that the track record for government IT projects is that they are delivered late, over budget, and full of bugs, if they work at all, I think that a new development approach is worth trying.

Next, the project managers need a deep understanding of the technology used, so they know what problems the system has and how to fix those problems as soon as possible. The developers building the system should be discussing the requirements directly with the system's stakeholders, with an eye toward reducing the scope of the system to the minimum. The developers building the system should also be responsible for maintaining the system when it is in production; this encourages them to make maintenance easy and to design the system to be easy to use, so they don't have to make many modifications later. And finally, all systems should start by developing the simplest possible system that handles the most important need, making that system work well, and then gradually adding features to the system, so as not to run afoul of Gall's Law.
My second opinion comes from a lot of personal experience, and it is that Ruby is a great fit for building the types of systems the government needs, at least for those systems where runtime performance is not critical. Ruby makes it very fast to develop systems and get them to a working state, and as long as you have good tests that cover most of the system's functionality, it's fairly easy to maintain Ruby web applications and modify them as requirements change, though a lot of that does depend on your choice of Ruby libraries. My experience is that external requirement changes cause far more code changes than refactoring does, so the limited ability to use automated refactoring in Ruby is not a major issue.

Ruby is easy to learn and easy to teach new programmers, and this is especially important for government work, since in my experience at least, applicants for government programming positions will probably not have much experience with Ruby. Importantly, Ruby keeps programming fun. I think the importance of this is definitely underrated: government work does not pay very well, so having a fun language to program in can help retain talented staff.

After being responsible for all programming for a government department for the last 15 years, first as a developer and now as a manager, I can confidently say that Ruby is a great choice for application development, and I highly recommend it. And that concludes my presentation. I'd like to thank all of you for listening to me. If you have any questions, I have 45 seconds. (laughing) (applauding)
