Professional Self-Assessment
My reflection on the program, the portfolio, and what I learned while building it.
My name is Christopher Prempeh, and this is the professional self-assessment for my Computer Science Capstone at Southern New Hampshire University. I came to SNHU after earning my associate degree in computer science from Rowan-Cabarrus Community College, which means I have been studying computer science in some form for about four years. My time at SNHU in particular pushed me further than I expected. The upper-level courses were not just more of the same. They forced me to think about software as a system rather than a collection of files that happen to run, and I think that shift is what the ePortfolio I am introducing here is really about. What I have put together on this site is not a highlight reel of assignments. It is a demonstration of how I think about building software at this point in my education, and it is the best picture I can give of the kind of work I want to do professionally.
Throughout the program I have had the chance to work in small collaborative settings where communication about code mattered as much as the code itself. Group discussions in CS 250 on the software development lifecycle taught me that requirements change, teams change, and the value of clean documentation shows up long after you think you are done with a feature. My code review in this capstone is meant to reflect those lessons. It is not me performing a perfect walkthrough. It is me showing that I can analyze existing code, call out real weaknesses honestly, and explain my reasoning in a way another developer or a manager could actually follow. That skill carries over to stakeholder communication too. Writing the three enhancement narratives for this portfolio taught me that explaining a technical decision to someone who was not in the room for it requires removing jargon without losing substance. You have to respect the reader's time and still be accurate, which is harder than it sounds.
On the technical side, data structures and algorithms clicked most clearly for me during CS 260 and again during my second capstone enhancement. For the second enhancement I built a TrendCalculator that turns a stream of raw weight entries into meaningful trend analysis. The original version of the app collected data but did nothing useful with it. The enhanced version uses a TreeMap to keep entries in chronological order, a sliding-window algorithm to produce O(n) rolling averages, and a threshold-based alert system that compares weekly changes against a healthy range. I chose TreeMap over HashMap on purpose, and I wrote the narrative to explain that trade-off out loud, because design choices are the part of the work I actually enjoy. I like understanding why one data structure fits the shape of the problem better than another, and I like writing code that shows that thinking is happening behind the scenes.
Software engineering and database work came together for me in this capstone in a way that I do not think any single course could have produced on its own. The first enhancement refactored my CS 360 mobile app from a monolithic Activity into a layered MVVM architecture with a Repository as the seam between the ViewModels and the Room database. That restructuring taught me something I had read about many times but not actually felt until I did it, which is that good architecture is mostly about making future changes less painful. The third enhancement proved the point. Adding a Goal entity, a userId foreign key on every weight row, a composite index, and salted password hashing touched every layer of the app, but because the layers already existed cleanly from the first enhancement, the changes were mechanical rather than chaotic. I also built a Robolectric test suite for the database layer so I could verify foreign key enforcement, user-scoped queries, and goal ordering without needing an emulator. Outside the WeightTracker artifact, my CS 350 coursework with the Raspberry Pi gave me hands-on experience with GPIO, PWM, UART serial communication, and a Morse code state machine, along with an AHT20 temperature sensor that pushed readings to an LCD display. That work was a useful counterpart to the mobile-app focus of my ePortfolio because it forced me to think about timing, hardware interrupts, and low-level state management, which is a different kind of engineering discipline.
Security has probably been the area where my thinking has changed the most over the program. When I started, I thought of security as a separate topic you studied in one class and checked off. CS 405 changed that for me, and my third capstone enhancement made it concrete. The original CS 360 app stored passwords as plain text, which is the kind of thing you do not notice as a student because everything still works on the surface. Replacing that with salted SHA-256 hashing through a dedicated PasswordHasher class, using MessageDigest.isEqual for constant-time comparison to reduce timing leakage, and adding foreign key constraints with cascade delete to prevent orphan data all felt like basic hygiene rather than advanced security work. That shift in perspective is what I think a security mindset actually means. It is not about being paranoid. It is about seeing where data can be misused or corrupted and designing with those risks in mind from the beginning.
My work experience in tech has reinforced many of the same ideas. I have spent time in environments where the difference between code that technically works and code that holds up in production comes down to whether anyone can read it six months later. That is part of why my capstone enhancements focus so heavily on structure, documentation, and tests rather than flashy features. I would rather demonstrate that I can build something another developer could inherit and extend than demonstrate that I can cram five new features into an already-crowded file. The jobs I want next are the ones where that way of thinking is rewarded, and the ePortfolio is meant to be evidence that it is already how I work.
The artifacts in this portfolio all come from the same source project, which was intentional. Using my CS 360 WeightTracker Android application for all three enhancement categories lets me show growth across software design, algorithms, and databases on a single system rather than showing isolated slices of three different projects. The first enhancement is the architectural foundation, turning an Activity-heavy app into a proper layered system with MVVM, a Repository, ViewModels, LiveData, single-consumption events, and centralized input validation. The second enhancement adds the TrendCalculator and its sliding-window algorithms, plugging into the architecture from the first enhancement without disrupting it. The third enhancement redesigns the database schema into the three tables the original CS 360 requirements actually called for, adds foreign keys and indexes, secures password storage, and wires user ownership through every layer. Taken together, the three enhancements tell one story about taking a basic class project and turning it into something that feels closer to a real application, with the trade-offs and decisions spelled out in the narratives. That is the full range of what I want this ePortfolio to communicate, and the pages that follow are where you can see the work itself.
Informal Code Review
A walkthrough of the original WeightTracker, the weaknesses I found, and the enhancements I built on top of it.
The Artifact
WeightTracker is an Android app originally built in CS 360. Every enhancement in this portfolio is applied to this single codebase.
Original
WeightTracker (CS 360)
The project as submitted for Mobile Architecture and Programming. Functional, met the CS 360 requirements, but had structural, algorithmic, and database gaps that opened the door to this capstone.
Enhanced
Weight Tracker 2 (Capstone)
My rebuilt version. It adds MVVM architecture, sliding-window trend analysis, a three-table schema with foreign keys, salted password hashing, and a full suite of unit, Robolectric, and instrumented tests.
Three Enhancements
One artifact, three perspectives. Each enhancement touches a different area of computer science practice.
Software Design & Engineering
MVVM refactor
The artifact I selected for this enhancement was my WeightTracker Android application from CS 360: Mobile Architecture and Programming. This app was originally created during that course and was designed to let users create an account, log in, and track their daily weight entries in a scrollable list. The data is stored locally using Room and SQLite, and the app also included an SMS permission screen for goal notifications. At the time, the project met the class requirements and functioned the way it was supposed to, but after revisiting it for CS 499, it was clear that there were several areas where the overall design and structure could be improved.
I chose this artifact for the software design and engineering category because it gave me a strong opportunity to show real improvement in both code quality and application structure. The original version worked, but the way it was built made it hard to maintain, extend, or hand off to another developer. A lot of the logic was packed into one large onCreate method, especially in DataGridActivity.java, where UI setup, database access, input parsing, date formatting, goal checking, and list management were all handled in one place. It got the job done, but it was not clean, scalable, or professional. That made it a good candidate for enhancement because I could take something functional and turn it into something much more structured and maintainable.
The biggest improvement I made was refactoring the app to use the MVVM architecture pattern. In the enhanced version, the Activities are responsible only for UI-related work, such as finding views, showing dialogs, and displaying toast messages. The ViewModels now handle screen logic, input validation, and communication with the data layer. I also added a WeightRepository to separate the ViewModels from the Room DAOs, which means the Activities no longer touch the database directly. This change made the code much easier to follow and gave each class a more focused purpose. The original project had 10 Java files, while the enhanced version has 18, with each file handling a specific responsibility instead of trying to do everything at once.
Another important improvement was adding single-consumption UI events. LiveData is useful for observing state changes, but one issue with it is that it can replay the last value after configuration changes like screen rotation. In this app, that could cause actions like a successful login event to fire again and trigger navigation a second time. To fix that, I added a SingleEvent wrapper class so that events such as toast messages and navigation actions are consumed only once. It is a small addition, but it fixes a real issue and shows a better understanding of how Android state management works beyond the basics.
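The single-consumption pattern described above can be sketched in plain Java. The class name SingleEvent comes from the narrative, but the method names here (getContentIfNotHandled, peek) are illustrative assumptions, not necessarily the exact API in the WeightTracker source:

```java
// Minimal sketch of a single-consumption event wrapper. The first read
// returns the payload; any later read (e.g. a LiveData replay after
// screen rotation) returns null, so the UI action fires only once.
class SingleEvent<T> {
    private final T content;
    private boolean handled = false;

    SingleEvent(T content) {
        this.content = content;
    }

    /** Returns the content the first time it is requested, null afterward. */
    T getContentIfNotHandled() {
        if (handled) {
            return null;
        }
        handled = true;
        return content;
    }

    /** Lets an observer inspect the value without consuming it. */
    T peek() {
        return content;
    }
}
```

An observer then checks getContentIfNotHandled() for null before navigating or showing a toast, so a replayed emission after a configuration change is simply ignored.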
I also improved input validation significantly. In the original project, validation was minimal. It mainly checked whether fields were empty, and for weight values it used Double.parseDouble() without proper exception handling or range checking. That meant invalid input could crash the app or allow unrealistic values to be saved. In the enhanced version, I created an InputValidator utility class along with a WeightParseResult type so that validation returns a structured result instead of throwing errors or silently accepting bad input. I also moved error message references into Android string resources instead of hardcoding them into the Java files. That helped keep validation more consistent and made the app feel more polished overall.
One of the most important fixes was implementing the missing update operation. The original app supported create, read, and delete, but not update, even though CRUD functionality was expected in the original course project. Users had no way to edit an existing weight entry once it was saved. To correct that, I added an @Update method to WeightDao, a setId() method on WeightEntry, an edit button for each list row, an edit dialog that pre-fills the current value, and the full update flow through the ViewModel and Repository. This completed the CRUD requirement and made the app more realistic as a usable tracking tool.
I also updated the RecyclerView implementation by replacing the original adapter logic with a ListAdapter backed by DiffUtil. In the earlier version, the delete listener captured a position value directly from onBindViewHolder, which could lead to the wrong item being removed if the list changed quickly. With DiffUtil, the list now updates more efficiently and more safely, because it calculates the minimal changes between old and new data. I also updated click handling to use getAdapterPosition() with a NO_POSITION guard, which eliminates the stale-position problem. On top of that, I standardized threading across the app so that all database work now goes through DbExecutors.io() by way of the Repository. The original version used different threading approaches in different places, which was inconsistent and harder to manage.
Another area I improved was testing and documentation. The original project had no unit tests, so I added JUnit tests for InputValidator, SingleEvent, and WeightParseResult. These tests cover cases like null input, empty values, whitespace, non-numeric values, and out-of-range data. That helped verify that the new validation logic works the way it should. I also cleaned up the documentation by adding Javadoc comments throughout the codebase. Instead of writing comments that sounded like assignment notes, I focused on making the comments explain the actual purpose of the code in a way another developer could understand.
This enhancement aligns most strongly with Course Outcome 4, which focuses on using well-founded and innovative techniques, skills, and tools in computing practices. The refactor introduced industry-standard Android patterns such as MVVM, ViewModels, LiveData, a Repository layer, and DiffUtil-backed list rendering. These are all techniques that make the solution more professional and closer to what would be expected in a real development environment. It also supports Outcome 2 because the narrative and the codebase itself both communicate technical decisions more clearly than before. Outcome 1 is supported through the improved maintainability of the project, since the refactored structure is much easier for another developer to follow. Outcome 5 is also supported through stronger validation and better awareness of issues like improper input handling and plain-text password storage. I identified the password issue as a remaining gap rather than pretending it was already solved, which reflects a more realistic security mindset. I do not think Outcome 3 is the strongest fit for this enhancement, since algorithmic problem-solving will be better demonstrated in my next enhancement when I add trend analysis and rolling averages.
The biggest thing I learned from this enhancement was how much easier development becomes when logic is separated properly. In the original app, adding even one new feature meant adding more code to an already overloaded Activity. After the refactor, building the edit feature felt much more straightforward because each layer had a clear role. I could update the DAO, expose the function through the Repository, validate the data in the ViewModel, and let the Activity handle only the user interaction. That separation made the whole app easier to think through and easier to improve.
One challenge that stood out to me was understanding the SingleEvent issue. At first, it seemed minor, but once I looked deeper into how LiveData handles configuration changes, it became clear that replayed values could create real bugs. That was a good reminder that it is not enough to know how to use a framework at a surface level. You also need to understand how it behaves underneath if you want to build something stable. I had a similar experience with DiffUtil. The original notifyDataSetChanged() approach worked, but it was a blunt fix. Moving to ListAdapter and DiffUtil made the updates more efficient and forced me to think more carefully about how identity and content are represented in a list-based UI.
I also learned that professional documentation matters, but only when it serves the code. My first instinct was to write comments that connected everything back to the rubric and course outcomes, but that is not how real source code should read. Once I rewrote the comments to explain technical intent instead, the project felt a lot cleaner and more professional. Looking ahead, the architecture I built here should make the next two enhancements easier to implement. The Repository and ViewModel structure already gives me a good foundation for adding trend analysis in the algorithms enhancement and expanding the database design in the next phase. That is really the whole point of good software design in the first place. It is not just about making the code look cleaner. It is about making future work less annoying, which is rare and beautiful in computer science.
Algorithms & Data Structures
TrendCalculator
The artifact for this enhancement is the same WeightTracker Android application from CS 360: Mobile Architecture and Programming that I have been working with throughout the capstone. The app lets users create an account, log in, and track daily weight entries in a scrollable list, with data stored locally using Room and SQLite. In Enhancement One, I refactored the app into MVVM with ViewModels, a Repository layer, LiveData, and proper input validation. That gave the codebase a clean layered structure that made this second enhancement a lot easier to build on top of.
I selected this artifact for the algorithms and data structures category because the original app collected weight data but didn't do anything useful with it. My CS 360 proposal described progress charts, visual trends, and goal tracking, but the finished product was just a flat list of numbers with no analysis or feedback. The goal check was a single line comparing a raw entry to a hardcoded 150.0 using ==, which does not work well because daily weight fluctuates too much for an exact match and == on doubles has floating-point precision issues. That gap between the data being collected and any real processing made it a good candidate for this enhancement.
The biggest piece of this enhancement is a TrendCalculator class that does all the trend analysis on the weight entries. It starts by loading entries into a TreeMap keyed by date string, which keeps them in chronological order automatically at O(log n) per insert and supports efficient date-range lookups through subMap(). From there it computes 7-day rolling averages using an incremental sliding window that keeps a running sum instead of re-scanning at every position, making the whole pass O(n). When there are at least 14 entries, it calculates week-over-week deltas by sampling the rolling averages at 7-entry intervals, and generates WARNING or CRITICAL alerts when changes exceed a threshold that defaults to 2 pounds per week. Below 14 entries, it suppresses the weekly data and falls back to comparing the first and last rolling averages for trend direction, so users still get useful feedback from their first week of logging. The final stage replaces the old goal check by comparing the rolling average against the target using <= instead of exact equality, and the ViewModel tracks whether the goal was previously reached so the notification only fires once on a false-to-true crossing instead of on every add operation.
All of the analysis results are wrapped in simple classes that are set at construction time and cannot be changed after. RollingAverage, WeeklyChange, TrendAlert, and WeightTrendResult capture their values through constructors and wrap the lists in Collections.unmodifiableList() so nothing downstream can accidentally mutate the results. Integrating this with the MVVM architecture from Enhancement One was pretty straightforward. I added a computeTrend() method to WeightRepository that fetches entries chronologically and runs the analysis, and DataGridViewModel exposes the result as LiveData and calls refreshTrend() after every CRUD operation. DataGridActivity observes the trend LiveData and renders a summary card above the RecyclerView with the rolling average, trend direction, and when there is enough data, the weekly change and alerts.
I also added a test suite with over 25 unit tests covering null and empty input, sorted map construction with out-of-order and duplicate entries, sliding window math, the weekly change suppression for fewer than 14 entries, alert thresholds, trend direction using both the weekly-change and rolling-average paths, goal comparison edge cases, a full-pipeline integration test with realistic data, and immutability verification. Because TrendCalculator has no Android dependencies, everything runs as plain JUnit without an emulator.
This enhancement aligns most strongly with Course Outcome 3, which focuses on designing computing solutions using algorithmic principles while managing trade-offs. The whole thing is built around that: the sliding window for O(n) rolling averages, TreeMap vs HashMap with documented trade-offs, the threshold-based alert system, and the trend-based goal comparison replacing exact equality. Outcome 4 is also supported through patterns like immutable result objects, pure computation separated from framework dependencies, and a design that makes the algorithm reusable. Outcome 2 is supported through the narrative and through code comments that explain design decisions. Outcome 1 gets coverage from the separation of concerns making it easier for another developer to follow. I do not think Outcome 5 is the strongest fit for this enhancement since the security-focused work will come in Enhancement Three with password hashing and database constraints.
The biggest thing I learned was how much good architecture matters when you are adding new features. After the MVVM refactor, the trend logic lives in its own class with no Android dependencies, and I never had to touch the list display or CRUD code while building it. The main challenge was handling the boundary between having enough data for rolling averages and having enough for weekly comparisons. My first approach showed weekly deltas too early from partial data, which felt misleading. Tightening that to 14 entries with a fallback trend direction made the UI more honest about what it was actually showing. I was also surprised how naturally TreeMap fit this use case. I originally planned a sorted ArrayList, but once I thought about date-range queries and subMap(), the trade-off was clearly worth it. Looking ahead, the architecture should plug directly into Enhancement Three when I add the Goal entity, since swapping the hardcoded goal for a database-backed value is a one-line change in the ViewModel. That is the whole point of building things in layers.
Databases
Schema and security
The artifact for this enhancement is the same WeightTracker Android application from CS 360: Mobile Architecture and Programming that I have been improving throughout the capstone. The app lets users create an account, log in, and track daily weight entries in a scrollable list, with data stored locally using Room and SQLite. In Enhancement One, I refactored the app into MVVM with ViewModels, a Repository layer, LiveData, reusable validation, and cleaner UI events. In Enhancement Two, I added a TrendCalculator that produces rolling averages, weekly change analysis, alerts, and a better goal comparison. Those earlier enhancements gave this third enhancement a stronger foundation because the database changes touch the entities, DAOs, repository, ViewModels, and user interface.
I selected this artifact for the databases category because the original CS 360 version had several database gaps that became clear during my first code review. The weight-tracking option required three tables: one for users, one for daily weight entries, and one for a goal weight. My original app only had users and weights, while the goal was represented by a hardcoded value in the Activity. There was also no relationship between a user and a weight entry, so multiple users on the same device would share the same list of entries. Finally, passwords were stored as plain text, which created an obvious security and privacy weakness. This enhancement focused on correcting those database design issues while keeping the app understandable as a senior college project.
The first major change was password hashing. I added a PasswordHasher utility class that uses SHA-256 with a per-user salt. When a user registers, the app generates sixteen random bytes with SecureRandom, combines that salt with the submitted password, hashes the result, and stores the value as a Base64-encoded salt and hash separated by a colon. During login, LoginViewModel retrieves the user by username and calls PasswordHasher.verifyPassword to compare the submitted password against the stored salted hash. The comparison uses MessageDigest.isEqual, which helps reduce timing leakage compared with a simple string comparison. This is not the same as a full production authentication system, but it is a clear improvement over storing readable passwords in SQLite and it shows a stronger security mindset.
The second major change was the schema redesign. I added a Goal entity, which gives the project the third table the original CS 360 requirement called for. Goal stores the userId, target weight, and creation date. I also updated WeightEntry so each row now has a userId foreign key back to the users table. Both the weights and goals tables use cascade delete so that if a user were ever removed, their dependent rows would not be left behind as orphan data. I also added indexes on userId and a composite index on userId and date for weight entries, which supports the way the app filters and analyzes entries for one user at a time. Feedback from earlier milestones pushed me to document these trade-offs inside the code itself rather than only in the narrative, so the reasoning behind each decision is now visible to anyone reading the source.
Threading user ownership through the app was one of the most important parts of the enhancement. LoginActivity passes the logged-in username to DataGridActivity through an Intent extra. DataGridActivity passes that userId into DataGridViewModel through the ViewModel factory, and the ViewModel passes it into repository calls. The DAO queries are now scoped by userId, so one user's entries and goals stay separate from another user's data on the same device. I also replaced the hardcoded goal with a Set Goal dialog. The goal value is validated with the same InputValidator style used elsewhere in the app, saved through GoalDao, and then used by the trend analysis so the goal-reached event is based on a real user-defined target.
I made one practical trade-off in AppDatabase by using fallbackToDestructiveMigration when moving the schema to version 2. In this capstone version, that keeps the app from crashing if an older local database exists. The original schema did not connect weight rows to users, so a real migration would need a careful decision about how to assign old entries to an account. In a production app, I would replace destructive migration with an explicit migration strategy and test it with real upgrade data. For this school project, the trade-off is acceptable because the main goal is demonstrating the improved schema and database relationships, and the decision is called out honestly in both the code and this narrative.
I also expanded the testing story for this database enhancement. The existing unit tests already covered validation, SingleEvent behavior, password hashing, and the trend calculator. I added a Robolectric database test that can run from Android Studio without a connected emulator. That test uses an in-memory Room database and verifies user-scoped weight queries, foreign key enforcement, newest-goal behavior, goal updates, and repository trend computation through Room. I also added an instrumented database test suite under androidTest for running against a real Android runtime when a device or emulator is available. Those instrumented tests cover duplicate username handling, scoped and ordered weight queries, inclusive date ranges, update and delete behavior, foreign key failures, goal CRUD, repository trend analysis, and LiveData emission from the Room query.
This enhancement aligns most strongly with Course Outcome 5 because it focuses on privacy, security, and data integrity. Storing salted password hashes, scoping all entries by userId, enforcing foreign keys, and using cascade delete all come from thinking about how the app's data could be misused or become inconsistent. It also supports Outcome 4 through standard Android database practices with Room entities, DAOs, indexes, LiveData queries, and a repository layer. Outcome 3 is shown through the indexing and relationship trade-offs, especially choosing userId and date as the main query path for the time-series data. Outcomes 1 and 2 are supported by the cleaner organization and the comments and narrative that make the code easier to explain during a review.
The biggest thing I learned from this enhancement is that a database change is never isolated once an app is already layered. Adding userId to WeightEntry meant changing the entity, DAO, repository, ViewModel, Activity, and tests. It was not hard in any single place, but it required consistency across the whole app. I also learned that testing the database layer matters more once the app has real relationships and constraints, because those constraints can silently pass in development and only surface under certain conditions at runtime. Incorporating instructor feedback from previous milestones helped me get ahead of those issues by confirming the exact entity structure and DAO signatures before writing any of the migration and test code. The final version is still simple enough to fit the course project, but it now represents the original idea much better: users have their own data, goal weights are actually stored, and password storage is handled with more care than the first version.
Course Outcome Coverage
How the capstone artifacts demonstrate the five CS program outcomes.
Downloads
Everything in one place. Narratives, self-assessment, slides, and source.