GSoC ’24: My journey as a contributor at Learning Equality

This is a guest blog post written by Kshitij Thareja. Enjoy!
Hello and welcome to my very first blog post! So, I was recently selected as a GSoC (Google Summer of Code) contributor at Learning Equality for the project: Integrate visual testing with Kolibri Design System’s CI.
Learning Equality is an education technology nonprofit that develops and maintains Kolibri, an offline-first learning platform that runs on a variety of low-cost and legacy devices. Kolibri Design System is a resource for designers and developers who are building Kolibri products. It includes the design system patterns and the library of UI components.
Now that the coding period is about to end, I thought it would be good to document my journey so far, including everything I’ve worked on in the last two months. Also, I consider this writeup to be the starting point of my blogging ambition :)
Before going into the details, I’d like to briefly introduce GSoC for those of you who aren’t aware of it. It’s a program by Google focused on bringing new contributors into open-source software development, and it introduces you to a new way of building software collaboratively. Beyond that, GSoC is a platform that lets you build on your current skills and hone them. Multiple organizations participate in the program every year, bringing quite a good number of interesting projects to the table. Contributors can then choose to contribute to any organization and project of their preference. Your selection depends on how good and promising your proposal is and how appealing it is to the mentoring organization.
I’ll be describing the finer parts of my story, from choosing the right organization to my selection, in a separate post. What I’ll be focusing on in this post is the work I did since the beginning of the standard coding period.
If you’d like a quick look at my work related to this project, feel free to check out this pull request: Introduce visual testing mechanism to KDS
May 1, 2024 - The day that started it all
I had been quite anxious during the month that followed proposal submission, waiting for the result. When the day finally arrived, my tension levels were off the charts. Once the clock hit 23:30, I quickly grabbed my laptop and began searching for the results on the GSoC website, but to no avail. Then I waited for some time, only to receive an official mail from Google Summer of Code, with the subject line:
GSoC 2024: Congratulations, your proposal with Learning Equality has been accepted!
Well. I would just say that it feels great to finally achieve something you’ve worked hard for.
Community bonding period
From May 2, 2024 — May 26, 2024, we had a community bonding period to facilitate interaction with the assigned mentors and to have short discussions on the project. That was when I first interacted with Blaine Jester and Alex Velez, who mentored me throughout the project. We had discussed my proposal earlier on Slack, where Blaine motivated me to explore possibilities to add more value to it, something I’m really grateful for. In our first meeting after my selection, we discussed my proposal in more detail and had some casual conversation.
During this period, I started diving deeper into the finer aspects of the project. I went through the documentation of Percy (a SaaS product that takes snapshots of webpages and checks for visual diffs by comparing them with baseline snapshots) and Puppeteer (an open-source Node.js library that automates and simplifies web development and testing). I had previous experience with unit testing, but visual testing was something new for me. I went through some blogs to get the gist of it and started working on the implementation plan for the project with inputs from my mentors.
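To make this concrete, here is a minimal, standalone example of taking a Percy snapshot from a Puppeteer script using the public @percy/puppeteer API (the URL and snapshot name are placeholders):

const puppeteer = require('puppeteer');
const percySnapshot = require('@percy/puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:4000'); // placeholder dev-server URL
  // Captures the page's DOM and sends it to Percy, which renders it and
  // compares the result against the approved baseline snapshot
  await percySnapshot(page, 'Homepage');
  await browser.close();
})();

When run under Percy’s CLI (percy exec), the snapshot gets uploaded to the Percy dashboard, where any visual diffs against the baseline are flagged for review.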
A little info on visual testing:
Visual Testing is used to verify that the user interface is presented correctly to all users. This is an extremely important process, as software applications and websites need to look just as good on a wide variety of devices and browsers.
It ensures that each element on the page appears in the right shape, size, and position, allowing for detection of any discrepancies between what the users expect to see and what is shown on screen. It ultimately ensures that a product meets usability standards and is fully optimized across platforms.
For KDS, it can greatly help in validating any UI modifications to the design system’s components via automated test workflows. This way, we can maintain quality assurance, reduce the manual effort required to validate UI changes, and boost developers’ confidence in their pull requests.
First half of the project duration
Between May 27, 2024 — July 8, 2024, I worked on setting up the basic configuration for integrating Puppeteer and Percy with the existing Jest test suite (which was being used for unit tests). This proved challenging at first, as the unit tests used the ‘jsdom’ environment, whereas Puppeteer needed a ‘node’ environment to run tests in the browser. Setting everything up from scratch would have required a lot of custom configuration, which would have been very time consuming. I countered this issue by using the jest-puppeteer package, which provides all the configuration required to run tests with Puppeteer alongside Jest. This involved some changes to the existing setup file and Jest config for unit tests, and another set of these files was created for visual tests.
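As a rough sketch, the dedicated visual-test config can be as small as pointing Jest at the jest-puppeteer preset (the file name and testMatch pattern below are illustrative assumptions, not the exact KDS files):

// jest.visual.config.js: a hypothetical separate config for visual tests
module.exports = {
  // The preset swaps the 'jsdom' environment for a Puppeteer-aware 'node' one
  // and manages the browser lifecycle for the test run
  preset: 'jest-puppeteer',
  testMatch: ['<rootDir>/lib/**/__tests__/**/*.spec.js'], // assumed test location
};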
Next, I worked on writing functions to ensure the development server used for visual testing was functioning properly before the actual tests were run. This included server checks like starting the server, waiting for it to load, checking whether the testing page had loaded, validating that the required environment variables were set, etc.
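For illustration, the “wait for the server” check can be a simple polling loop like this hypothetical helper (the URL handling, retry count, and interval are assumptions, not the actual KDS code):

// Hypothetical helper that polls the dev server until it responds
async function waitForServer(url, { retries = 30, intervalMs = 1000 } = {}) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const response = await fetch(url); // global fetch, available in Node 18+
      if (response.ok) return; // server is up and serving the page
    } catch (err) {
      // connection refused: server not ready yet, retry below
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Dev server at ${url} did not become ready in time`);
}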
Now, the visual tests for all components were meant to be written in the same files as the corresponding unit tests. So, to toggle between the tests to be executed based on the test mode, I had to define a utility function, ‘canTakeScreenshot’, which was used to identify the test type and execute the required test blocks accordingly. Also, for visual tests we need to render the required component on a separate page and then take the snapshot. To facilitate this dynamic rendering during the test runs, I made a separate page called ‘testing-playground’, which receives messages from the test runner and renders the required component based on the body of the message. This way, I had a basic component rendering and snapshot mechanism in place which worked exactly as intended.
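From the test runner’s side, the message-based rendering flow could look roughly like this (a sketch only; the actual message shape and readiness check in KDS may differ):

// Hypothetical sketch: ask the 'testing-playground' page to mount a component.
// 'page' is the global browser page that jest-puppeteer provides to tests.
async function renderComponent(component, props) {
  await page.evaluate(
    ({ component, props }) => {
      // The playground page listens for messages like this and mounts
      // the requested component with the given props
      window.postMessage({ component, props }, '*');
    },
    { component, props }
  );
  // Assumed convention: the playground marks the DOM once mounting is done
  await page.waitForSelector('[data-rendered="true"]');
}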
You can see all of my work during this period here: Percy and jest-puppeteer environment setup for visual testing
Midterm Evaluation
From July 8, 2024 — July 12, 2024, I had my midterm evaluation period, where all of my work from the previous month was evaluated, and both my mentors and I filled out the respective evaluation forms. Again, it was pretty exciting (and maybe terrifying, idk how to explain...), and I was waiting for the results, which were scheduled for July 13th if my memory serves me right. Anyways, I passed, so yay!!!

Moving on to the second half
So, after the midterm evaluation was over, we already had a working visual testing mechanism. Next up on the list were improvements and further additions to the existing setup. We started off by migrating part of the server startup script to use the ‘concurrently’ package, which handled server startup and shutdown very well and helped us remove the extra code that had been required to manually kill all the processes initiated by the server. This was done on Blaine’s advice, and it made the setup cleaner and removed some other dependencies.
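To illustrate the idea, a single npm script in package.json can let concurrently start the dev server and the test run together, and shut down whichever process is still alive once the tests finish (the script and command names here are made up for the example):

"scripts": {
  "test:visual": "concurrently --kill-others --success first \"yarn dev\" \"yarn jest --config jest.visual.config.js\""
}

The --kill-others flag tears down the dev server once the test command exits, and --success first makes the combined exit code follow the first command to finish, i.e. the tests.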
You can check out the changes mentioned above here: Replace custom checks from the server script to use concurrently
Now, to make writing visual tests easier for developers and to hide unnecessary implementation details from them, we worked on abstracting some of the functions used for rendering the components and on separating unit and visual tests. We shifted from the ‘canTakeScreenshot’ function to custom ‘describe’ and ‘it’ test blocks for visual tests. So, the new tests would be written like:
describe.visual('KButton Visual Tests', () => {
  it('renders correctly with default props', async () => {
    await renderComponent('KButton', { text: 'Test Button' });
    await takeSnapshot('KButton - Default');
  });
});
Using ‘describe.visual’ appends a [Visual] tag to the test name. At runtime, the tests to be executed are then selected with a regex pattern: it checks that the [Visual] tag is present in the test name when visual tests are to be executed, and checks for its absence when unit tests are to be executed. This was again something both Alex and I had thought about, and Blaine helped connect our ideas into one. From the above code sample, you can see how easy it has become to write a simple visual test.
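The core of the tagging trick can be sketched in a couple of lines (a simplified illustration, not the exact KDS helper):

// Simplified sketch: describe.visual just tags the suite name
describe.visual = (name, fn) => describe(`${name} [Visual]`, fn);

// Each mode can then filter suites by name, for example:
//   jest --testNamePattern='\[Visual\]'          -> run only visual tests
//   jest --testNamePattern='^(?!.*\[Visual\])'   -> run only unit tests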
The code for the above-mentioned part can be checked here: Add abstraction logic for simplifying writing visual tests
This marked the completion of the local visual testing setup. Now I had to integrate it into KDS’ GitHub workflows. For this, I used the existing unit testing workflow as a reference, as most of the setup was the same. The visual tests are executed normally as part of the workflow, and the Percy build URL is taken from the logs. An automated comment is then posted to the PR, containing the Percy dashboard link, to give reviewers easy access. There were some issues with the workflow when we tested it in the KDS repository, related to the repository’s security settings and missing browser installations, which were later taken care of.
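To give a sense of what the comment step can look like, here is a hypothetical script in the style of actions/github-script (the environment variable and how the Percy URL is extracted from the logs are assumptions):

// Hypothetical PR-comment step for use with actions/github-script
module.exports = async ({ github, context }) => {
  // Assumes an earlier step parsed the Percy build URL out of the test logs
  const percyUrl = process.env.PERCY_BUILD_URL;
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body: `Percy build for this PR: ${percyUrl}`,
  });
};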
You can check out the PRs related to setting up the workflow here:
Introduce visual testing to the existing JavaScript tests workflow
Add puppeteer config for CI environments for running visual tests workflow
[Visual Testing] Update comment job in workflow to use custom script
And the final part… The documentation for the newly implemented visual testing service. I tried to make the documentation as extensive as possible, with inputs from my mentors and other members of the organization. Overall, this documentation is designed to make it easy for new contributors to understand the implementation and write visual tests for various components.
The work on the documentation can be accessed from this PR: [Visual Testing] Add documentation for the testing mechanism
Wanna see what visual tests look like? I’ll share a sample screenshot of a Percy build:
[Screenshot: a sample Percy build]
Final thoughts
Well, we’ve almost reached the end of the GSoC period. The past two months have been a mixture of excitement and learning. I’ve kinda become used to the weekly meetings with my mentors and the work-feedback-work loop. While working on this project, I was able to constantly improve myself, and I now understand the importance of every aspect of developing an idea. I am really thankful to my mentors for being a constant support throughout this journey. Their ideas shaped the outcome of this project in a positive way and pushed me to think and research more so as to arrive at the best possible solutions.
I hope to continue working with Learning Equality after this project is officially over. Excited for whatever comes next!
Thank you for joining me on this journey and stay tuned for more updates! Until then, Goodbye :)