{"collectionMetadata":{"baseUrl":"https://s3-us-west-2.amazonaws.com/levi-portfolio-media","thumbnailName":"thumbnail.png","thumbnailDimensions":{"width":128,"height":128},"logoName":"logo.png"},"posts":[{"id":"about-levi","titleShort":"About\nLevi","titleLong":"About Levi","urls":{"githubProfile":"https://github.com/levilindsey","facebook":"https://facebook.com/levislindsey","twitter":"https://twitter.com/levisl","linkedin":"https://linkedin.com/in/levi-lindsey","youtube":"https://www.youtube.com/playlist?list=PLIuJN99AFOPSF3p4f0siFo22XiMXqijQU","itchio":"https://levilindsey.itch.io/","snoring-cat":"https://snoringcat.games","resume":"https://docs.google.com/document/d/1RKgxLzazYLZiIJq_sJpmKtBDtCvaEdlgs_rUBd-ZZyk/preview#"},"jobTitle":"Frontend Software Engineer","location":"Seattle, WA","date":{"start":"4/27/1988","end":"Present","tieBreaker":9},"categories":[],"images":[{"fileName":"levi-in-savitri.jpg","description":"Levi playing Satyavan in the opera \"Savitri\" by Gustav Holst. In this scene, Satyavan is warning off whomever is lurking in the forest."},{"fileName":"jackie-and-levi-in-maui.jpg","description":"Levi with his wife, Jackie, in Maui."},{"fileName":"roydor-commission-levi-solo-480.png","description":"A pixel-art avatar a friend commissioned for Levi."},{"fileName":"levi-at-forge.jpg","description":"Levi designed and built a four-burner, propane-powered forge for his father."},{"fileName":"levi-in-the-bartered-bride.jpg","description":"Levi playing Vašek in the opera \"The Bartered Bride\" by Bedřich Smetana. In this scene, Vašek, a stuttering fool, is musing about about his forthcoming arranged marriage."},{"fileName":"jackie-and-levi-wedding-toast.jpg","description":"Levi and Jackie at their wedding."},{"fileName":"levi-at-camp-hahobas.jpg","description":"Levi leading a campfire song about peeling a banana. 
He was a counselor and lifeguard at Camp Hahobas, a Boy Scout summer camp."},{"fileName":"levi-head-shot.jpg","description":"Levi at the University of Washington's Friday Harbor Laboratories on San Juan Island."}],"videos":[],"content":"## The Bio\r\n\r\nLevi is a Frontend Software Engineer. He's also constantly tinkering on game-dev side projects—which are all highlighted in this portfolio.\r\n\r\nLevi is originally from Olympia, WA. He now lives in Seattle, WA with his wife Jackie and their daughter.\r\n\r\nOutside the realm of computer science, Levi has many hobbies and interests. First and foremost, he is a musician. Levi is a classically trained singer and has performed leading roles in many operas and musicals; some of his more notable performances include: [Dido and Aeneas][dido-and-aeneas-url] (Aeneas), [Savitri][savitri-url] (Satyavan), and [My Fair Lady][my-fair-lady-url] (Henry Higgins). Levi also [sings with a cappella groups][love-like-you-url] and [plays a lot of ukulele][down-today-url]. Check out some of his [recordings on YouTube][youtube-url]!\r\n\r\nSome of his other hobbies include: board games/card games, laser-cutting, brewing beer, blacksmithing, scuba diving, and a never-ending list of home improvements.\r\n\r\nLevi is also an Eagle Scout and spent his high school summers as a camp counselor and lifeguard at [Camp Hahobas][camp-hahobas-url], a Boy Scout camp in Washington state.\r\n\r\n## The Cover-Letter Spiel\r\n\r\nLevi is a seasoned technical leader with experience guiding projects through all stages of development and across a wide array of platforms.\r\n\r\n- He invents novel interactive experiences.\r\n- He develops with an emphasis on maintainable and self-documented code.\r\n- He designs maintainable solutions to high-level problems, in ambiguous problem spaces, with complicated dependencies.\r\n- He leads teams, designs technical roadmaps, and coordinates timelines and dependencies.\r\n- He advocates open-source technology. 
You can fork all of his many side-projects at [github.com/levilindsey](https://github.com/levilindsey)! \r\n\r\n\r\n[dido-and-aeneas-url]: https://en.wikipedia.org/wiki/Dido_and_Aeneas\r\n[savitri-url]: https://en.wikipedia.org/wiki/Savitri_(opera)\r\n[my-fair-lady-url]: https://en.wikipedia.org/wiki/My_Fair_Lady\r\n[youtube-url]: https://www.youtube.com/playlist?list=PLIuJN99AFOPSF3p4f0siFo22XiMXqijQU\r\n[love-like-you-url]: https://www.youtube.com/watch?v=yH7L_bZSwbM&list=PLIuJN99AFOPSF3p4f0siFo22XiMXqijQU\r\n[down-today-url]: https://youtu.be/HmALRuBoDno\r\n[camp-hahobas-url]: https://web.archive.org/web/20160807121041/http://www.hahobas.org/\r\n"},{"id":"snoring-cat","titleShort":"Snoring Cat LLC","titleLong":"Snoring Cat LLC","urls":{"snoring-cat":"https://snoringcat.games"},"jobTitle":"Owner and manager","location":"Seattle, WA","date":{"start":"2/2021","end":"Present","tieBreaker":6},"categories":["work","art","animation","music","godot","game","2D","solo-work"],"images":[{"fileName":"icon-512.png","description":"A pixel-art sleeping cat."}],"videos":[],"content":"Levi formed Snoring Cat LLC as the publisher for some of his more substantial games and software projects.\r\n\r\nCheck out Snoring Cat LLC's site at [snoringcat.games](https://snoringcat.games).\r\n\r\nIf you're curious about what went into forming this LLC, check out [this blog post](https://blog.levi.dev/2021/02/snoring-cat-forming-llc.html).\r\n"},{"id":"game-dev-sabbatical","titleShort":"Sabbatical:\nGame dev\nexploration","titleLong":"Sabbatical: Adventures in game 
development","urls":{"blog":"https://blog.levi.dev","snoring-cat":"https://snoringcat.games"},"jobTitle":"","location":"","date":{"start":"1/2021","end":"Present","tieBreaker":5},"categories":["work","art","animation","music","godot","game","2D"],"images":[{"fileName":"logo.png","description":"A simple game-dev icon Levi made."}],"videos":[],"content":"Levi's tinkering with [Godot](https://godotengine.org/) and 2D platformers, while on sabbatical from his job at Google.\r\n\r\nFollow along by subscribing to Levi's [blog](https://blog.levi.dev)!\r\n\r\nIn particular, if you're curious about Levi's motivations and goals for his sabbatical, check out [his first post](https://blog.levi.dev/2021/01/wait-what-am-i-doing.html).\r\n\r\nLevi also formed Snoring Cat LLC to publish some of his more substantial games and software projects under. Check out Snoring Cat LLC's site at [snoringcat.games](https://snoringcat.games)."},{"id":"surfacer","titleShort":"Platformer\nprocedural\npathfinding","titleLong":"Surfacer: A procedural pathfinding 2D-platformer framework for Godot","urls":{"demo":"https://snoringcat.games/play/squirrel-away","github":"https://github.com/SnoringCatGames/surfacer"},"jobTitle":"","location":"","date":{"start":"2/2019","end":"Present","tieBreaker":3},"categories":["side-project","app","godot","game","2D","library"],"images":[{"fileName":"surfaces-and-edges.png","description":"The Surfacer framework works pre-parsing a level into a \"platform graph\". The nodes are represented by points along the different surfaces in the level (floors, walls, and ceilings). 
The edges are represented by possible movement trajectories between points along surfaces."},{"fileName":"navigator-preselection.png","description":"A* search is used to find paths through the platform graph."},{"fileName":"edge-step-calculation-debugging.png","description":"Surfacer includes a powerful platform graph inspector, which makes it easy to understand and debug how the platform graph was calculated."}],"videos":[{"videoHost":"youtube","id":"2Q15fjAEncg","description":"A demonstration of the Surfacer framework in action. A cat is controlled by mouse clicks to navigate through a level of 2D platforms."}],"content":"_Surfacer is owned by [Snoring Cat LLC](https://snoringcat.games)._\r\n\r\n_A procedural pathfinding 2D-platformer framework for [Godot](https://godotengine.org/)._\r\n\r\n_\"Surfacer\": Like a platformer, but with walking, climbing, and jumping on all surfaces!_\r\n\r\n![Surfaces and edges in a platform graph](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/surfaces-and-edges.png)\r\n\r\n## What is this?\r\n\r\n**tl;dr**: Surfacer works by **pre-parsing** a level into a **\"platform graph\"**. The **nodes** are represented by points along the different surfaces in the level (floors, walls, and ceilings). The **edges** are represented by possible movement trajectories between points along surfaces. There are different types of edges for different types of movement (e.g., jumping from a floor to a floor, falling from a wall, walking along a floor). 
At run time, **[A* search](https://en.wikipedia.org/wiki/A*_search_algorithm)** is used to calculate a path to a given destination.\r\n\r\nSome features include:\r\n- Walking on floors, climbing on walls, climbing on ceilings, jumping and falling from anywhere.\r\n- [Variable-height jump and fast-fall](https://kotaku.com/the-mechanics-behind-satisfying-2d-jumping-1761940693).\r\n- Adjusting movement trajectories around intermediate surfaces (such as jumping over a wall or under an overhang).\r\n- Configurable movement parameters on a per-player basis (e.g., horizontal acceleration, jump power, gravity, collision boundary shape and size, which types of edge movement are allowed).\r\n- Level creation using Godot's standard pattern with a [TileMap in the 2D scene editor](https://docs.godotengine.org/en/3.2/tutorials/2d/using_tilemaps.html).\r\n- Pre-parsing the level into a platform graph, and using A* search for efficient pathfinding at runtime.\r\n- A powerful inspector for analyzing the platform graph, in order to debug and better understand how edges were calculated.\r\n\r\n## But why?\r\n\r\nBecause there aren't many other good tools out there for intelligent pathfinding in a platformer.\r\n\r\nThe vast majority of platformers use pretty simple computer-player AI for movement--for example:\r\n- Walk to edge, turn around, repeat.\r\n- Jump continuously, moving forward.\r\n- Move with a regular bounce or surface-following pattern.\r\n- Move horizontally toward the human player, \"floating\" vertically as needed in order to move around obstacles and platforms.\r\n\r\nEven most examples of more sophisticated AI pathfinding are still pretty limited. One common technique uses machine learning and is trained on hundreds to thousands of human-generated jumps within an explicit pre-fabricated level. 
This makes level-generation difficult and is not flexible to dynamic platform creation/movement.\r\n\r\nThere are two key reasons why good pathfinding AI isn't really used in platformers:\r\n1. It's hard to implement right; there is a lot of math involved, and there are a lot of different edge cases to account for.\r\n2. Dumb AI is usually plenty effective on its own to create compelling gameplay. The user often doesn't really notice or care how simple the behavior is.\r\n\r\nBut there are use-cases for which we really benefit from an AI that can accurately imitate the movement mechanics of the player. One example is controlling the player by tapping on locations in the level for them to move toward. Another example is a flexible game mode in which a computer player can swap in for a human player depending on how many humans are present.\r\n\r\n## Platformer AI\r\n\r\n### The platform graph: Pre-parsing the world\r\n\r\nSurfacer depends on the level being represented as a [`TileMap`](https://docs.godotengine.org/en/stable/classes/class_tilemap.html#class-tilemap).\r\n\r\nIn order for our AI to traverse our world, we first need to parse the world into a platform graph. We do this up-front, when the level is loaded, so that we can efficiently search the graph at run time. Dynamic updates to the graph can be performed at runtime, but these could be expensive if not done with care.\r\n\r\nThe nodes of this graph correspond to positions along distinct surfaces. 
Since our players can walk on floors, climb on walls, and climb on ceilings, we store floor, wall, and ceiling surfaces.\r\n\r\nThe edges of this graph correspond to a type of movement that the player could perform in order to move from one position on a surface node to another.\r\n- These edges are directional, since the player may be able to move from A to B but not from B to A.\r\n- The ends of an edge could be along the same surface or on different surfaces (e.g., for climbing up a wall vs jumping from a floor).\r\n- There could be multiple edges between a single pair of nodes, since there could be multiple types of movement that could get the player from one to the other.\r\n- These edges are specific to a given player type. If we need to consider a different player that has different movement parameters, then we need to calculate a separate platform graph for that player.\r\n\r\n![Surfaces in a platform graph](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/surfaces.png)\r\n\r\n![Edges in a platform graph](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/edges.png)\r\n\r\n### Nodes: Parsing a Godot `TileMap` into surfaces\r\n\r\n> **NOTE:** The following algorithm assumes that the given `TileMap` only uses tiles with convex collision boundaries.\r\n\r\n#### Parse individual tiles into their constituent surfaces\r\n\r\n- Map each `TileMap` cell into a polyline that corresponds to the top-side/floor portion of its collision polygon.\r\n - Calculate whether the collision polygon's vertices are specified in a clockwise order.\r\n - Use this to determine the iteration step size.\r\n - `step_size = 1` if clockwise; `step_size = -1` if counter-clockwise.\r\n - Regardless of whether the vertices are specified in a clockwise order, we will iterate over them in clockwise order.\r\n - Find both the leftmost and rightmost vertices.\r\n - Start with the leftmost vertex.\r\n - If there is a wall segment on the left side of the 
polygon, then this vertex is part of it.\r\n - If there is no wall segment on the left side of the polygon, then this vertex must be the cusp between a preceding bottom-side/ceiling segment and a following top-side/floor segment (i.e., the previous segment is underneath the next segment).\r\n - Even if there is no segment along one side, we store a surface for that side; this surface is only represented by a single point.\r\n - Iterate over the following vertices until we find a non-wall segment (this could be the first segment, the one connecting to the leftmost vertex).\r\n - Wall segments are distinguished from floor/ceiling segments according to their angle. This is configurable, but typically, a segment up to 45 degrees is a floor/ceiling and a segment steeper than 45 degrees is a wall.\r\n - This non-wall segment must be the start of the top-side/floor polyline.\r\n - Iterate, adding segments to the result polyline, until we find either a wall segment or the rightmost vertex.\r\n - We then also save a mapping from a `TileMap` cell index to each of the different surfaces we've calculated as existing in that cell.\r\n- Repeat the above process for the right-side, left-side, and bottom-side surfaces.\r\n\r\n#### Remove internal surfaces\r\n\r\n> **NOTE:** This will only detect internal surface segments that are equivalent to another internal segment. 
But for grid-based tiling systems, this can often be enough.\r\n\r\n- Check for pairs of floor+ceiling segments or left-wall+right-wall segments, such that both segments share the same vertices.\r\n- Remove both segments in these pairs.\r\n\r\n#### Merge any connecting surfaces\r\n\r\n- Iterate across each floor surface A.\r\n- In a nested loop, iterate across each other floor surface B.\r\n - Ideally, we should be using a spatial data structure that allows us to only consider nearby surfaces during this nested iteration (such as an R-Tree).\r\n- Check whether A and B form a \"continuous\" surface.\r\n - A and B are both polylines that only have two end points.\r\n - Just check whether either endpoint of A equals either endpoint of B.\r\n - Actually, our original `TileMap` parsing results in every surface polyline being stored in clockwise order, so we only need to compare the end of A with the start of B and the start of A with the end of B.\r\n- If they do:\r\n - Merge B into A.\r\n - Optionally, remove any newly created redundant internal collinear points.\r\n - Remove B from the surface collection.\r\n- Repeat the iteration until no merges are performed.\r\n\r\n#### Record adjacent neighbor surfaces\r\n\r\n- Every surface should have both adjacent clockwise and counter-clockwise neighbor surfaces.\r\n- Use a similar process as above for finding surfaces with matching end positions.\r\n\r\n### Edges: Calculating jump movement trajectories\r\n\r\n**tl;dr**: The Surfacer framework uses a procedural approach to calculate trajectories for movement between surfaces. The algorithms used rely heavily on the classic [one-dimensional equations of motion for constant acceleration](https://physics.info/motion-equations/). These trajectories are calculated to match the abilities and limitations exhibited by corresponding human-controlled movement. 
After the trajectory for an edge is calculated, it is translated into a simple instruction/input-key start/end sequence that should reproduce the calculated trajectory.\r\n\r\n> **NOTE:** A machine-learning-based approach would probably be a good alternate way to solve this general problem. However, one perk of a procedural approach is that it's relatively easy to understand how it works and to modify it to perform better for any given edge-case (and there are a _ton_ of edge-cases).\r\n\r\n#### The high-level steps\r\n\r\n- Determine how high we need to jump in order to reach the destination.\r\n- If the destination is out of reach (vertically or horizontally), ignore it.\r\n- Calculate how long it will take for vertical motion to reach the destination from the origin.\r\n- We will define the movement trajectory as a combination of two independent components: a \"vertical step\" and a \"horizontal step\". The vertical step is based primarily on the jump duration calculated above.\r\n- Calculate the horizontal step that would reach the destination displacement over the given duration.\r\n- Check for any unexpected collisions along the trajectory represented by the vertical and horizontal steps.\r\n - If there is an intermediate surface that the player would collide with, we need to try adjusting the jump trajectory to go around either side of the colliding surface.\r\n - We call these points, which movement must pass through in order to avoid collisions, \"waypoints\".\r\n - Recursively check whether the jump is valid to and from either side of the colliding surface.\r\n - If we can't reach the destination when moving around the colliding surface, then try backtracking and consider whether a higher jump height from the start would get us there.\r\n - If there is no intermediate collision, then we can calculate the ultimate edge movement instructions for playback based on the vertical and horizontal steps we've calculated.\r\n\r\n#### Some important aspects\r\n\r\n- 
We treat horizontal and vertical motion as independent of each other. This greatly simplifies our calculations.\r\n - We calculate the necessary jump duration--and from that the vertical component of motion--up-front, and use this to determine times for each potential step and waypoint of the motion. Knowing these times up-front makes the horizontal min/max calculations easier.\r\n- We have a broad-phase check to quickly eliminate possible surfaces that are obviously out of reach.\r\n - This primarily looks at the horizontal and vertical distance from the origin to the destination.\r\n\r\n#### Calculating \"good\" jump and land positions\r\n\r\nDeciding which jump and land positions to base an edge calculation on is non-trivial. We could just try calculating edges for a bunch of different jump/land positions for a given pair of surfaces. But edge calculations aren't cheap, and executing too many of them impacts performance. So it's important that we carefully choose \"good\" jump/land positions that have a relatively high likelihood of producing a valid and efficient edge.\r\n\r\nAdditionally, when jumping from a floor, we need to determine what initial horizontal velocity to use for the edge calculation. 
This horizontal start velocity can then influence the jump/land positions.\r\n\r\n- Some interesting jump/land positions for a surface include the following:\r\n - Either end of the surface.\r\n - The closest position along the surface to either end of the other surface.\r\n - This closest position, but with a slight offset to account for the width of the player.\r\n - This closest position, but with an additional offset to account for horizontal or vertical displacement with minimum jump time and maximum horizontal velocity.\r\n - This offset becomes important when considering jumps that start with max-speed horizontal velocity, which could otherwise overshoot the land position if we didn't account for the offset.\r\n - The closest interior position along the surface to the closest interior position along the other surface.\r\n - The position along a horizontal surface that is behind the overall connected region that the vertical land surface is a part of.\r\n - This position is important if we need to consider movement around behind a wall that then lands on the top of the wall.\r\n- We try to minimize the number of jump/land positions returned, since having more of these greatly increases the overall time to parse the platform graph.\r\n- We usually consider surface-interior points before surface-end points (which usually puts shortest distances first).\r\n- We also decide start velocity when we decide the jump/land positions.\r\n - We only ever consider start velocities with zero or max speed.\r\n- Additionally, we often quit early as soon as we've calculated the first valid edge for a given pair of surfaces.\r\n - In order to decide whether to skip an edge calculation for a given jump/land position pair, we look at how far away it is from any other jump/land position pair that we already found a valid edge for, on the same surface, for the same surface pair. 
If it's too close, we skip it.\r\n - This is another important performance optimization.\r\n\r\nUnfortunately, most jump/land position calculations are highly dependent on the types and spatial arrangement of the two surfaces. There are many possible combinations, and most of these combinations must be considered individually. The following diagrams illustrate the many different jump-land combinations.\r\n\r\n![A legend for the illustrations of jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-legend.png)\r\n\r\n![Illustrations of floor-to-floor jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-floor-to-floor.png)\r\n\r\n![Illustrations of floor-to-wall jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-floor-to-wall.png)\r\n\r\n![Illustrations of wall-to-floor jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-wall-to-floor.png)\r\n\r\n![Illustrations of wall-to-opposite-facing-wall jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-wall-to-opposite-wall.png)\r\n\r\n![Illustrations of wall-to-same-facing-wall jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-wall-to-same-wall.png)\r\n\r\n![Illustrations of floor-to-ceiling jump-land-position combinations](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/jump-land-positions-floor-to-ceiling.png)\r\n\r\n#### Calculating the start velocity for a jump\r\n\r\n- In the general case, we can't know at build-time what direction along a surface the player will\r\n be moving from when they need to start a jump.\r\n- Unfortunately, using start velocity x values of zero for all jump edges tends to produce 
very\r\n unnatural composite trajectories (similar to using perpendicular Manhattan distance routes\r\n instead of more diagonal routes).\r\n- So, we can assume that for surface-end jump-off positions, we'll be approaching the jump-off\r\n point from the center of the edge.\r\n- And for most edges we should have enough run-up distance in order to hit max horizontal speed\r\n before reaching the jump-off point--since horizontal acceleration is relatively quick.\r\n- Also, we only ever consider start-velocity values of zero or max horizontal speed. Since the\r\n horizontal acceleration is quick, most jumps at run time shouldn't need a medium speed. And\r\n even if they did, we force the initial velocity of the jump to match expected velocity, so the\r\n jump trajectory should proceed as expected, and any sudden change in velocity at the jump start\r\n should be acceptably small.\r\n\r\n#### Calculating the total jump duration (and the vertical step for the edge)\r\n\r\n- At the start of each edge-calculation traversal, we calculate the minimum total time needed to reach the destination.\r\n - If the destination is above, this might be the time needed to rise that far in the jump.\r\n - If the destination is below, this might be the time needed to fall that far (still taking into account any initial upward jump-off velocity).\r\n - If the destination is far away horizontally, this might be the time needed to move that far horizontally (taking into account the horizontal movement acceleration and max speed).\r\n - The greatest of these three possibilities is the minimum required total duration of the jump.\r\n- The minimum peak jump height can be determined from this total duration.\r\n- All of this takes into account our variable-height jump mechanic and the difference in slow-ascent and fast-fall gravities.\r\n - With our variable-height jump mechanic, there is a greater acceleration of gravity when the player either is moving downward or has released the jump 
button.\r\n - If the player releases the jump button before reaching the maximum peak of the jump, then their current velocity will continue pushing them upward, but with the new stronger gravity.\r\n - To determine the duration to the jump peak height in this scenario, we first construct two instances of one of the basic equations of motion--one for the former part of the ascent, with the slow-ascent gravity, and one for the latter part of the ascent, with the fast-fall gravity. We then use algebra to substitute the equations and solve for the duration.\r\n\r\n#### Calculating the horizontal steps in an edge\r\n\r\n- Once we've decided that a surface could be within reach, we then check for possible collisions between the origin and destination.\r\n - To do this, we simulate frame-by-frame motion using the same physics timestep and the same movement-update function calls that would be used when running the game normally. We then check for any collisions at each frame.\r\n- If we detect a collision, then we define two possible \"waypoints\"--one for each end of the collided surface.\r\n - In order to make it around this intermediate surface, we know the player must pass around one of the ends of this surface.\r\n - These waypoints we calculate represent the minimum required deviation from the player's original path.\r\n- We then recursively check whether the player could move to and from each of the waypoints.\r\n - We keep the original vertical step and overall duration the same.\r\n - We can use that to calculate the time and vertical state that must be used for the waypoint.\r\n - Then we only really consider whether the horizontal movement could be valid within the given time limit.\r\n- If so, we concatenate and return the horizontal steps required to reach the waypoint from the original starting position and the horizontal steps required to reach the original destination from the waypoint.\r\n\r\n#### Backtracking to consider a higher max jump 
height\r\n\r\n- Sometimes, a waypoint may be out of reach, when we're calculating horizontal steps, given the current step's starting position and velocity.\r\n- However, maybe the waypoint could be within reach, if we had originally jumped a little higher.\r\n- To account for this, we backtrack to the start of the overall movement traversal and consider whether a higher jump could reach the waypoint.\r\n - The destination waypoint is first updated to support a new jump height that would allow for a previously-out-of-reach intermediate waypoint to also be reached.\r\n - Then all steps are re-calculated from the start of the movement, while considering the new destination state.\r\n- If it could, we return that result instead.\r\n\r\n#### Waypoint calculations\r\n\r\n- We calculate waypoints before steps.\r\n - We calculate a lot of state to store on them, and then depend on this state during step calculation.\r\n - Some of this state includes:\r\n - The time for passing through the waypoint (corresponding to the overall jump height and edge duration).\r\n - The horizontal direction of movement through the waypoint (according to the direction of travel from the previous waypoint or according to the direction of the surface).\r\n - The min and max possible x-velocity when the movement passes through this waypoint.\r\n - With a higher speed through a waypoint, we could reach further for the next waypoint, or we could be stuck overshooting the next waypoint. 
So it's useful to calculate the range of possible horizontal velocities through a waypoint.\r\n - The actual x-velocity for movement through the waypoint is calculated later when calculating the corresponding movement step.\r\n - We typically try to use an x-velocity that will minimize speed through the waypoint, while still satisfying the horizontal step displacement and the waypoint's min/max limitations.\r\n- Here's the sequence of events for waypoint calculations:\r\n - Start by calculating origin and destination waypoints.\r\n - For the origin waypoint, min, max, and actual x-velocity are all zero.\r\n - For the destination waypoint, min and max are assigned according to how acceleration can be applied during the step (e.g., at the start or at the end of the interval).\r\n - Then, during step calculation traversal, when a new intermediate waypoint is created, its min and max x-velocity are assigned according to both the min and max x-velocity of the following waypoint and the actual displacement and duration of the step from the new waypoint to the next waypoint.\r\n - Intermediate waypoints are calculated with pre-order tree traversal.\r\n - This poses a small problem:\r\n - The calculation of a waypoint depends on the accuracy of the min/max x-velocity of its next waypoint.\r\n - However, the min/max x-velocity of the next waypoint could need to be updated if it in turn has a new next waypoint later on.\r\n - Additionally, a new waypoint could be created later on that would become the new next waypoint instead of the old next waypoint.\r\n - To ameliorate this problem, every time a new waypoint is created, we update its immediate neighbor waypoints.\r\n - These updates do not solve all cases, since we may in turn need to update the min/max x-velocities and movement sign for all other waypoints. And these updates could then result in the addition/removal of other intermediate waypoints. But we have found that these two updates are enough for most cases. 
If we detect that a neighbor waypoint would be invalidated during an update, we abandon the edge calculation, which could result in a false-negative result.\r\n - Steps are calculated with in-order tree traversal (i.e., in the same order they'd be executed when moving from origin to destination).\r\n\r\n#### Fake waypoints\r\n\r\n- When calculating steps to navigate around a collision with a ceiling or floor surface, sometimes one of the two possible waypoints is what we call \"fake\".\r\n- A fake waypoint corresponds to the left side of the floor/ceiling surface when movement from the previous waypoint is rightward (or to the right side when movement is leftward).\r\n- In this case, movement will need to go around both the floor/ceiling as well as its adjacent wall surface.\r\n- The final movement trajectory should not end up moving through the fake waypoint.\r\n- The actual waypoint that the final movement should move through is instead the \"real\" waypoint that corresponds to the far edge of this adjacent wall surface.\r\n- So, when we find a fake waypoint, we immediately replace it with its adjacent real waypoint.\r\n- Example scenario:\r\n - Origin is waypoint #0, Destination is waypoint #3\r\n - Assume we are jumping from a low-left platform to a high-right platform, and there is an intermediate block in the way.\r\n - Our first step attempt hits the underside of the block, so we try waypoints on either side.\r\n - After trying the left-hand waypoint (#1), we then hit the left side of the block. So we then try a top-side waypoint (#2).\r\n - (Bottom-side fails the surface-already-encountered check).\r\n - After going through this new left-side (right-wall), top-side waypoint (#2), we can successfully reach the destination.\r\n - With the resulting scenario, we shouldn't actually move through both of the intermediate waypoints (#1 and #2). 
We should instead skip the first intermediate waypoint (#1) and go straight from the origin to the second intermediate waypoint (#2).\r\n\r\n> TODO: screenshot of example scenario\r\n\r\n#### Collision calculation madness\r\n\r\n**tl;dr**: Godot's collision-detection engine is very broken. We try to make it work for our\r\npathfinding, but there are still false negatives and rough edges.\r\n\r\nHere's a direct quote from a comment in Godot's underlying collision-calculation logic:\r\n\r\n> give me back regular physics engine logic
\r\n> this is madness
\r\n> and most people using this function will think
\r\n> what it does is simpler than using physics
\r\n> this took about a week to get right..
\r\n> but is it right? who knows at this point..
\r\n\r\n(https://github.com/godotengine/godot/blob/a7f49ac9a107820a62677ee3fb49d38982a25165/servers/physics_2d/space_2d_sw.cpp#L692)\r\n\r\nSome known limitations and rough edges include:\r\n- When a [`KinematicBody2D`](https://docs.godotengine.org/en/stable/classes/class_kinematicbody2d.html) is sliding around a corner of another collidable, Godot can sometimes calculate the wrong results (opposite direction) for `is_floor()`/`is_ceiling()`.\r\n- Inconsistency between the behavior of the [`KinematicBody2D`](https://docs.godotengine.org/en/stable/classes/class_kinematicbody2d.html) and [`Physics2DDirectSpaceState`](https://docs.godotengine.org/en/stable/classes/class_physics2ddirectspacestate.html) collision APIs.\r\n - We were originally using the Physics2DDirectSpaceState for most of our graph calculations. However, this API seems to be more broken than the KinematicBody2D API. Also, we're using the KinematicBody2D API at run time, so we see more consistent results by using the KinematicBody2D API at build time as well.\r\n\r\n### Navigator: Using the platform graph to move from A to B\r\n\r\nOnce the platform graph has been parsed, finding and moving along a path through the graph is relatively straightforward. 
The sequence of events looks like the following:\r\n\r\n- Given a target point to navigate towards and the player's current position.\r\n- Find the closest point along the closest surface to the target point.\r\n- Use A* search to find a path through the graph from the origin to the destination.\r\n - We can use distance or duration as the edge weights.\r\n- Execute playback of the instruction set for each edge of the path, in sequence.\r\n\r\n![Navigator finding a path to a destination](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/navigator-preselection.png)\r\n\r\n#### Dynamic edge optimization according to runtime approach\r\n\r\nAt runtime, after finding a path through build-time-calculated edges, we try to optimize the jump-off points of the edges to better account for the direction that the player will be approaching the edge from. This produces more efficient and natural movement. The build-time-calculated edge state would only use surface end-points or closest points. We also take this opportunity to update start velocities to exactly match what is allowed from the ramp-up distance along the edge, rather than either the fixed zero or max-speed value used for the build-time-calculated edge state.\r\n\r\n#### Edge instructions playback\r\n\r\nWhen we create the edges, we represent the movement trajectories according to the sequence of instructions that would produce the trajectory. Each instruction is simply represented by an ID for the relevant input key, whether the key is being pressed or released, and the time. The player movement system can then handle these input key events in the same way as actual human-triggered input key events.\r\n\r\n#### Correcting for runtime vs buildtime trajectory discrepancies\r\n\r\nWhen executing edge instructions, the resulting run-time trajectory is usually slightly off from the expected trajectory that was pre-calculated when creating the edge. 
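The instruction representation and playback described above can be sketched roughly as follows. This is an illustrative Python sketch with hypothetical names (the actual implementation is GDScript):

```python
from dataclasses import dataclass


@dataclass
class EdgeInstruction:
    """One pre-calculated input event: which key, press vs release, and when."""
    input_key: str    # e.g. "move_left", "jump"
    is_pressed: bool  # True for a press, False for a release
    time: float       # seconds since the start of the edge


class InstructionPlayback:
    """Replays a time-sorted instruction list as if the events were human input."""

    def __init__(self, instructions):
        self.instructions = instructions  # assumed sorted by time
        self.elapsed = 0.0
        self._next = 0

    def update(self, delta, active_keys):
        """Advance playback by one frame; mutate the set of currently held keys."""
        self.elapsed += delta
        while (self._next < len(self.instructions)
               and self.instructions[self._next].time <= self.elapsed):
            instruction = self.instructions[self._next]
            if instruction.is_pressed:
                active_keys.add(instruction.input_key)
            else:
                active_keys.discard(instruction.input_key)
            self._next += 1
```

Replaying recorded inputs through the same handling path as human input keeps run-time behavior consistent, but the per-frame quantization of event times is one source of the drift noted above.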
This variance is usually pretty minor, but, just in case, a given player can be configured to use the exact pre-calculated edge trajectory rather than the run-time version.\r\n\r\nTheoretically, this discrepancy shouldn't exist, and we should be able to eliminate it at some point.\r\n\r\n## Platform graph inspector\r\n\r\nAs you might imagine, the calculations for these edges can get quite complicated. To make these calculations easier to understand and debug, we created a powerful platform graph inspector. This can be accessed from the inspector panel (the gear icon in the top-right corner of the screen).\r\n\r\n![Platform graph inspector](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/platform-graph.png)\r\n\r\nThe inspector is a tree-view widget with the following structure:\r\n\r\n```\r\n - Platform graph [player_name]\r\n - Edges [#]\r\n - [#] Edges calculated with increasing jump height\r\n - JUMP_INTER_SURFACE_EDGEs [#]\r\n - [(x,y), (x,y)]\r\n - Profiler\r\n - ...\r\n - EDGE_VALID_WITH_INCREASING_JUMP_HEIGHT [1]\r\n - 1: Movement is valid.\r\n - ...\r\n - ...\r\n - ...\r\n - [#] Edges calculated without increasing jump height\r\n - ...\r\n - [#] Edges calculated with one step\r\n - ...\r\n - Surfaces [#]\r\n - FLOORs [#]\r\n - [(x,y), (x,y)]\r\n - _# valid outbound edges_\r\n - _Destination surfaces:_\r\n - FLOOR [(x,y), (x,y)]\r\n - JUMP_INTER_SURFACE_EDGEs [#]\r\n - [(x,y), (x,y)]\r\n - Profiler\r\n - ...\r\n - EDGE_VALID_WITH_INCREASING_JUMP_HEIGHT [1]\r\n - 1: Movement is valid.\r\n - ...\r\n - ...\r\n - Failed edge calculations [#]\r\n - REASON_FOR_FAILING [(x,y), (x,y)]\r\n - Profiler\r\n - ...\r\n - REASON_FOR_FAILING [#]\r\n - 1: Step result info\r\n - 2: Step result info\r\n - ...\r\n - ...\r\n - ...\r\n - ...\r\n - ...\r\n - ...\r\n - ...\r\n - ...\r\n - Profiler\r\n - ...\r\n - Global counts\r\n - # total surfaces\r\n - # total edges\r\n - # JUMP_INTER_SURFACE_EDGEs\r\n - ...\r\n```\r\n\r\nEach entry in this inspector tree is 
encoded with annotation information, which will render debugging info over the level for the corresponding entity. Additionally, each entry contains a detailed description. These are both shown when selecting the entry.\r\n\r\n![Edge-step calculation debugging annotation](https://s3-us-west-2.amazonaws.com/levi-portfolio-media/surfacer/edge-step-calculation-debugging.png)\r\n\r\n## Annotators\r\n\r\nWe include a large collection of annotators that are useful for visually debugging the calculation of the platform graph. Some of these are rendered by selecting entries in the platform graph inspector, and some can be toggled through checkboxes in the inspector panel.\r\n\r\n## Movement parameters\r\n\r\nWe support a large number of flags and parameters for adjusting various aspects of player/movement/platform-graph behavior. For a complete list of these params, see [MovementParams.gd](https://github.com/SnoringCatGames/surfacer/blob/master/src/platform_graph/edge/models/MovementParams.gd).\r\n\r\n## Extensible framework for custom movement mechanics\r\n\r\n> TODO: Describe this system. For now, look at the code under `src/player/action/action_handlers/` for examples.\r\n\r\n> **NOTE:** The procedural pathfinding logic is implemented independently of this framework. So, you can use this to add cool new mechanics for human-controlled movement, but the automatic pathfinding will only know about the specific default mechanics that it was designed around.\r\n\r\n## Notable limitations\r\n\r\n- Our build-time graph calculations take a long time, especially for a level with lots of surfaces (such as a big level, or a level with a small cell size).\r\n- There is a slight discrepancy between discrete and continuous trajectories. The former is what we see from movement produced by the frame-by-frame application of gravity and input actions on the player. The latter is what we see from our precise numerical analysis of algebraic equations when pre-calculating the platform graph. 
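The discrete-vs-continuous mismatch can be seen in a toy Python example (the gravity and frame-rate values here are arbitrary, not Surfacer's):

```python
GRAVITY = 5000.0        # px/s^2 (arbitrary example value)
FRAME_DELTA = 1.0 / 60.0


def discrete_fall_distance(duration):
    """Frame-by-frame (semi-implicit Euler) integration, as a run-time physics loop does it."""
    velocity = 0.0
    position = 0.0
    for _ in range(int(round(duration / FRAME_DELTA))):
        velocity += GRAVITY * FRAME_DELTA
        position += velocity * FRAME_DELTA
    return position


def continuous_fall_distance(duration):
    """Closed-form kinematics, as build-time edge calculations do it."""
    return 0.5 * GRAVITY * duration * duration


# After one second, the discrete result overshoots the continuous one by
# roughly GRAVITY * duration * FRAME_DELTA / 2 (about 42 px here).
```

The gap grows with both the duration and the frame delta, which is why longer edges accumulate more drift.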
We support a few different techniques for reconciling this:\r\n - `MovementParams.syncs_player_velocity_to_edge_trajectory`: When this flag is enabled, the player's run-time _velocity_ will be forced to match the expected pre-calculated (continuous) velocity for the current frame in the currently executing platform graph edge.\r\n - `MovementParams.syncs_player_position_to_edge_trajectory`: When this flag is enabled, the player's run-time _position_ will be forced to match the expected pre-calculated (continuous) position for the current frame in the currently executing platform graph edge.\r\n - `MovementParams.retries_navigation_when_interrupted`: When this flag is enabled, the navigator will re-attempt navigation to the original destination from the current position whenever it detects that the player has hit an unexpected surface, which can happen when the run-time discrete trajectories don't match the build-time continuous trajectories.\r\n- When two surfaces face each other and are too close for the player to fit between (plus a margin of a handful of extra pixels), our graph calculations can produce some false positives.\r\n- Surfacer doesn't currently fully support surfaces that consist of one point.\r\n- Our platform graph calculations produce false negatives for some types of jump edge scenarios:\r\n - A jump edge that needs to displace the jump position in order to make it around an intermediate waypoint with enough horizontal velocity to then reach the destination.\r\n - For example, if the player is jumping from the bottom of a set of stair-like surfaces, the jump position ideally wouldn't be as close as possible to the first rise of the first step (because they can't start accelerating horizontally until vertically clearing the top of the rise). 
Instead, if the player jumps from a slight offset from the rise, then they can pass over the rise with more speed, which lets them travel further during the jump.\r\n - A single horizontal step that needs multiple different sideways-movement instructions (i.e., accelerating first to one side and then to the other in the same jump):\r\n - For example, backward acceleration in order to not overshoot the end position as well as forward acceleration to then have enough step-end x-velocity in order to reach the following waypoint for the next step.\r\n- Surfacer is opinionated. It requires that you structure your app using TileMaps and specific node groups, and that you subclass certain framework classes in order to create players.\r\n - You need to define a set of input actions with the following names (via Project Settings > Input Map):\r\n - jump\r\n - move_up\r\n - move_down\r\n - move_left\r\n - move_right\r\n - dash\r\n - zoom_in\r\n - zoom_out\r\n - pan_up\r\n - pan_down\r\n - pan_left\r\n - pan_right\r\n - face_left\r\n - face_right\r\n - grab_wall\r\n - Your level's collidable foreground tiles must be defined in a TileMap that belongs to the \"surfaces\" node group.\r\n - Surfacer uses a very specific set of movement mechanics.\r\n - Fortunately, this set includes most features commonly used in platformers and can provide pretty sophisticated movement.\r\n - But the procedural path-finding doesn't know about complex platformer mechanics like special in-air friction or coyote time.\r\n - The Surfacer framework isn't yet decoupled from the Squirrel Away demo app logic.\r\n\r\n## Tests\r\n\r\n> **NOTE:** _Sadly, the tests are not set up to automatically run on presubmit, so some of the tests are severely out-of-date and broken._\r\n\r\nSurfacer uses the [Gut tool](https://github.com/bitwes/Gut) for writing and running unit tests.\r\n\r\n## Licenses\r\n\r\n- All code is published under the [MIT license](https://github.com/SnoringCatGames/surfacer/blob/master/LICENSE).\r\n- This project depends on various pieces of third-party code that are licensed separately. [Here is a list of these third-party licenses](https://github.com/SnoringCatGames/surfacer/blob/master/src/config/SurfacerThirdPartyLicenses.gd).\r\n"},{"id":"scaffolder","titleShort":"Godot\napplication\nscaffolding","titleLong":"Scaffolder: Application scaffolding and utility functionality for Godot","urls":{"github":"https://github.com/SnoringCatGames/scaffolder"},"jobTitle":"","location":"","date":{"start":"11/2020","end":"Present","tieBreaker":2},"categories":["side-project","app","godot","game","2D","library"],"images":[{"fileName":"scaffolder-screenshot.png","description":"Scaffolder provides a bunch of general-purpose application scaffolding, such as a widget library and a screen layout and navigation system."}],"videos":[],"content":"_Scaffolder is owned by [Snoring Cat LLC](https://snoringcat.games)._\r\n\r\nThis is an opinionated framework that provides a bunch of general-purpose application scaffolding and utility functionality for Godot games.\r\n\r\n## Features\r\n\r\n### Viewport scaling\r\n\r\nThis framework handles viewport scaling directly. You will need to turn off Godot's built-in viewport scaling (`Display > Window > Stretch > Mode = disabled`).\r\n\r\nThis provides some powerful benefits over Godot's standard behaviors, but requires you to be careful with how you define your GUI layouts.\r\n\r\n#### Handling camera zoom\r\n\r\nThis provides limited flexibility in how far the camera is zoomed. That is, you will be able to see more of the level on a larger screen, but not too much more of the level. 
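As a rough illustration of this behavior (the names, numbers, and exact formula here are assumptions for the sketch, not Scaffolder's actual API):

```python
# Illustrative values; Scaffolder reads the real ones from its configuration.
DEFAULT_SCREEN_SIZE = (1024, 768)  # the resolution the levels are designed around
MIN_ASPECT = 0.5                   # allowed game-region aspect-ratio range
MAX_ASPECT = 2.0


def game_region_and_zoom(viewport_width, viewport_height):
    """Clamp the game region to the allowed aspect range (bars fill the rest),
    then pick a zoom that shows the default amount of level along the
    more-constrained axis."""
    aspect = viewport_width / viewport_height
    if aspect > MAX_ASPECT:
        # Too wide: bars along the sides.
        region = (viewport_height * MAX_ASPECT, viewport_height)
    elif aspect < MIN_ASPECT:
        # Too tall: bars along the top and bottom.
        region = (viewport_width, viewport_width / MIN_ASPECT)
    else:
        region = (viewport_width, viewport_height)
    # Scale from level pixels to screen pixels: one axis shows exactly the
    # default extent of the level; the other axis shows a little more.
    zoom = min(region[0] / DEFAULT_SCREEN_SIZE[0],
               region[1] / DEFAULT_SCREEN_SIZE[1])
    return region, zoom
```

For a wider-than-default viewport, the zoom is pinned by the vertical axis: the same vertical extent of the level is visible, and the extra width shows more of the level horizontally.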
Similarly, on a wider screen, you will be able to see more from side to side, but not too much more.\r\n\r\n- You can configure a minimum and maximum aspect ratio for the game region.\r\n- You can configure a default screen size and aspect ratio that the levels are designed around.\r\n- At runtime, if the current viewport aspect ratio is greater than the max or less than the min, bars will be shown along the sides or top and bottom of the game area.\r\n- At runtime, the camera zoom will be adjusted so that the same amount of level is showing, either vertically or horizontally, as would be visible with the configured default screen size. If the screen aspect ratio is different from the default, then a little more of the level is visible in the other direction.\r\n- Any annotations that are drawn in the separate annotations CanvasLayer are automatically transformed to match the game area's current zoom and position.\r\n- Click positions can also be transformed to match the game area.\r\n\r\n#### Handling GUI scale\r\n\r\n- At runtime, a `gui_scale` value is calculated according to how the current screen resolution compares to the expected default screen resolution, as described above.\r\n- Then all fonts—which are registered with the scaffold configuration—are resized according to this `gui_scale`.\r\n- Then the size, position, and scale of all GUI nodes are updated accordingly.\r\n\r\n#### Constraints for how you define your GUI layouts\r\n\r\n> TODO: List any other constraints/tips.\r\n\r\n- Avoid custom positions, except maybe for centering images. 
For example:\r\n - Instead of encoding a margin/offset, use a VBoxContainer or HBoxContainer parent, and include an empty spacer sibling with size or min-size.\r\n - This is especially important when your positioning is calculated to include bottom/right-side margins.\r\n- Centering images:\r\n - To center an image, I often place a `TextureRect` inside of a `Control` inside of some type of auto-positioning container.\r\n - I then set the image position in this way: `TextureRect.rect_position = -TextureRect.rect_size/2`.\r\n - This wrapper pattern also works well when I need to scale the image.\r\n- In general, whenever possible, I find it helpful to use a VBoxContainer or HBoxContainer as a parent, and to have children use the shrink-center size flag for both horizontal and vertical directions along with a min-size.\r\n\r\n### Analytics\r\n\r\nThis feature depends on the proprietary third-party **[Google Analytics](https://analytics.google.com/analytics/web/#/)** service.\r\n\r\n- Fortunately, Google Analytics is at least free to use.\r\n- To get started with Google Analytics, [read this doc](https://support.google.com/analytics/answer/1008015?hl=en).\r\n- To learn more about the \"Measurement Protocol\" API that this class uses to send event info, [read this doc](https://developers.google.com/analytics/devguides/collection/protocol/v1).\r\n- To learn more about the \"Reporting API\" you could use to run arbitrary queries on your recorded analytics, [read this doc](https://developers.google.com/analytics/devguides/reporting/core/v4).\r\n - Alternatively, you could just use [Google's convenient web client](http://analytics.google.com/).\r\n\r\n#### \"Privacy Policy\" and \"Terms and Conditions\" documents\r\n\r\nIf you intend to record any sort of user data (including app-usage analytics or crash logs), you should create a \"Privacy Policy\" document and a \"Terms and Conditions\" document. These are often legally required when recording any sort of app-usage data. 
Fortunately, there are a lot of tools out there to help you easily generate these documents. You could then easily host these as view-only [Google Docs](https://docs.google.com/).\r\n\r\nHere are two such generator tools that might be useful, and at least have free-trial options:\r\n- [Termly's privacy policy generator](https://termly.io/products/privacy-policy-generator/?ftseo)\r\n- [Nishant's terms and conditions generator](https://app-privacy-policy-generator.nisrulz.com/)\r\n\r\n> _**DISCLAIMER:** I'm not a lawyer, so don't interpret anything from this framework as legal advice, and make sure you understand which laws you need to obey._\r\n\r\n### Automatic error/crash reporting\r\n\r\nThis feature currently depends on the proprietary third-party **[Google Cloud Storage](https://cloud.google.com/storage)** service. But you could easily override it to upload logs somewhere else.\r\n\r\n### Screen layout and navigation\r\n\r\n- You can control transitions through `Gs.nav`.\r\n- It is easy to include custom screens and exclude default screens.\r\n- Here are some of the default screens included:\r\n - Main menu\r\n - Credits\r\n - Settings\r\n - Configurable to display checkboxes, dropdowns, or plain text for whatever settings you might want to support.\r\n - Level select\r\n - Game/level\r\n - Pause\r\n - Notification\r\n - Configurable to display custom text and buttons as needed.\r\n - Game over\r\n\r\n### Lots of useful utility functions\r\n\r\nIt might just be easiest to scroll through some of the following files to see what sorts of functions are included:\r\n- [`Audio`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/Audio.gd)\r\n- [`CameraShake`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/CameraShake.gd)\r\n- [`DrawUtils`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/DrawUtils.gd)\r\n- [`Geometry`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/Geometry.gd)\r\n- [`Profiler`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/Profiler.gd)\r\n- [`SaveState`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/data/SaveState.gd)\r\n- [`Time`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/Time.gd)\r\n- [`Utils`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/utils/Utils.gd)\r\n\r\n### A widget library\r\n\r\nFor example:\r\n- [`AccordionPanel`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/gui/AccordionPanel.gd)\r\n- [`LabeledControlList`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/gui/labeled_control_list/LabeledControlList.gd)\r\n- [`ShinyButton`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/gui/ShinyButton.gd)\r\n- [`NavBar`](https://github.com/SnoringCatGames/scaffolder/blob/master/src/gui/NavBar.gd)\r\n\r\n## Licenses\r\n\r\n- All code is published under the [MIT license](https://github.com/SnoringCatGames/scaffolder/blob/master/LICENSE).\r\n- All art assets (files under `assets/images/`, `assets/music/`, and `assets/sounds/`) are published under the [CC0 1.0 Universal license](https://creativecommons.org/publicdomain/zero/1.0/deed.en).\r\n- This project depends on various pieces of third-party code that are licensed separately. [Here is a list of these third-party licenses](https://github.com/SnoringCatGames/scaffolder/blob/master/src/config/ScaffolderThirdPartyLicenses.gd).\r\n"},{"id":"squirrel-away","titleShort":"Point-and-click\nplatformer","titleLong":"Squirrel Away: A point-and-click platformer","urls":{"demo":"https://snoringcat.games/play/squirrel-away","github":"https://github.com/SnoringCatGames/squirrel-away"},"jobTitle":"","location":"","date":{"start":"2/2019","end":"5/2021"},"categories":["side-project","art","animation","music","app","godot","game","2D"],"images":[{"fileName":"squirrel-running.gif","description":"A squirrel running animation that Levi created. 
This was created by hand-drawing pixel art for each frame."},{"fileName":"cat-animation.gif","description":"A cat walking animation that Levi created. This was created using Godot's inverse kinematics and key frame APIs."},{"fileName":"surfaces-and-edges.png","description":"The Surfacer framework works by pre-parsing a level into a \"platform graph\". The nodes are represented by points along the different surfaces in the level (floors, walls, and ceilings). The edges are represented by possible movement trajectories between points along surfaces."},{"fileName":"navigator-preselection.png","description":"A* search is used to find paths through the platform graph."}],"videos":[{"videoHost":"youtube","id":"2Q15fjAEncg","description":"A demonstration of the Surfacer framework in action. A cat is controlled by mouse clicks to navigate through a level of 2D platforms."}],"content":"_Squirrel Away is owned by [Snoring Cat LLC](https://snoringcat.games)._\r\n\r\nThis point-and-click platformer game showcases procedural pathfinding using the [Surfacer](https://github.com/SnoringCatGames/surfacer/) framework.\r\n\r\nIn this game, the user can click anywhere in the level, and the cat character will then jump, walk, and climb across platforms in order to reach that target destination.\r\n\r\n## Software used\r\n\r\n- [Surfacer](https://github.com/SnoringCatGames/surfacer/): A framework that enables procedural path-finding across 2D platforms.\r\n- [Scaffolder](https://github.com/SnoringCatGames/scaffolder/): A framework that provides some general app infrastructure.\r\n- [Godot](https://godotengine.org/): Game engine.\r\n- [Piskel](https://www.piskelapp.com/user/5663844106502144): Pixel-art image editor.\r\n- [Aseprite](https://www.aseprite.org/): Pixel-art image editor.\r\n- [Bfxr](https://www.bfxr.net/): Sound effects editor.\r\n- [DefleMask](https://deflemask.com/): Chiptune music tracker.\r\n\r\n## Licenses\r\n\r\n- All code is published under the [MIT license](https://github.com/SnoringCatGames/squirrel-away/blob/master/LICENSE).\r\n- All art assets (files under `assets/images/`, `assets/music/`, and `assets/sounds/`) are published under the [CC0 1.0 Universal license](https://creativecommons.org/publicdomain/zero/1.0/deed.en).\r\n- This project depends on various pieces of third-party code that are licensed separately. Here are lists of these third-party licenses:\r\n - [addons/scaffolder/src/config/ScaffolderThirdPartyLicenses.gd](https://github.com/SnoringCatGames/scaffolder/blob/master/src/config/ScaffolderThirdPartyLicenses.gd)\r\n - [addons/surfacer/src/config/SurfacerThirdPartyLicenses.gd](https://github.com/SnoringCatGames/surfacer/blob/master/src/config/SurfacerThirdPartyLicenses.gd)\r\n - [src/config/SquirrelAwayThirdPartyLicenses.gd](https://github.com/SnoringCatGames/squirrel-away/blob/master/src/config/SquirrelAwayThirdPartyLicenses.gd)\r\n"},{"id":"inner-tube-climber","titleShort":"2D platformer\nendless\nclimber","titleLong":"Inner-Tube Climber: A 2D platformer endless climber","urls":{"play-store":"https://play.google.com/store/apps/details?id=dev.levi.inner_tube_climber","app-store":"https://apps.apple.com/us/app/inner-tube-climber/id1553158659","ludum-dare":"https://ldjam.com/events/ludum-dare/47/stuck-in-an-inner-tube","github":"https://github.com/levilindsey/ludum-dare-47"},"jobTitle":"","location":"","date":{"start":"10/2020","end":"3/2021"},"categories":["side-project","art","animation","music","mobile","android","ios","app","godot","game","2D","game-jam"],"images":[{"fileName":"game-play.png","description":"The goal of the game is to climb upward by jumping from platform to platform. 
A key mechanic is the ability to bounce off the walls in order to gain additional height."},{"fileName":"tuber-stuck.gif","description":"This animation of the character stuck in the snow and in their inner tube is shown at the start of the first level."},{"fileName":"main-menu.png","description":"The main menu shows an animation of the player sliding downhill on their inner tube."},{"fileName":"loading-screen.png","description":"The splash screen shows an animation of the player running."}],"videos":[{"videoHost":"youtube","id":"5ambx7K7Rjg","description":"Trailer for the mobile-app version of the game."},{"videoHost":"youtube","id":"QeTW9v1jYFg","description":"A demonstration of the gameplay for the original Ludum Dare version of the game."}],"content":"_The newer, mobile-app version of Inner-Tube Climber is owned by [Snoring Cat LLC](https://snoringcat.games)._\r\n\r\n_Oh no! The player is stuck inside a \"loop\"--that is, their inner tube--from a tragic inner-tubing accident that left them stuck in the bottom of an endless crevasse!_\r\n\r\nThis is an endless climber game, with fun wall bouncing!\r\n\r\n## Ludum Dare 47\r\n\r\n_**[Ludum Dare 47 submission](https://ldjam.com/events/ludum-dare/47/stuck-in-an-inner-tube/)**_\r\n\r\nThis game was originally a \"Compo\" submission for the [Ludum Dare 47 game jam](https://ldjam.com/events/ludum-dare/47/stuck-in-an-inner-tube/). All design, code, images, sound effects, and music were created by Levi in under 48 hours. This game rated in the 90th percentile for \"Overall\", \"Fun\", \"Graphics\", \"Audio\", and \"Humor\" (and in the 95th percentile for \"Fun\"!). The creation of this game was also [livestreamed on Twitch](https://www.twitch.tv/ukulelefury/videos).\r\n\r\nLevi later added a lot more polish and more features to the game and re-released it as a mobile app.\r\n\r\n### The theme: \"Stuck in a loop\"\r\n\r\nThe theme for the game jam was \"stuck in a loop\". This game addresses the theme in two ways:\r\n1. The player is stuck inside a \"loop\"--that is, their inner tube--from a tragic inner-tubing accident.\r\n2. The vertically-scrolling level has no end; it \"loops\" through previous platforms if you get far enough.\r\n\r\n## Software used\r\n\r\n- [Godot](https://godotengine.org/) was used to create this game.\r\n- [Aseprite](https://www.aseprite.org/) was used to create the images.\r\n- [Bfxr](https://www.bfxr.net/) was used to create the sound effects.\r\n- [DefleMask](https://deflemask.com/) was used to create the music.\r\n- [Trello](https://trello.com/b/GvuTgtRC/ludum-dare-47) was used for brainstorming and planning.\r\n\r\n## Licenses\r\n\r\n- The code is published under the [MIT license](LICENSE).\r\n- The art assets (files under `assets/images/`, `assets/music/`, and `assets/sfx/`) are published under the [CC0 1.0 Universal license](https://creativecommons.org/publicdomain/zero/1.0/deed.en).\r\n- This project depends on various pieces of third-party code that are licensed separately. [Here is a list of these third-party licenses](./docs/third-party-licenses.txt).\r\n"},{"id":"dark-time","titleShort":"Dark Time:\nChrome\nextension","titleLong":"Dark Time: Chrome extension","urls":{"chrome-web-store":"https://chrome.google.com/webstore/detail/dark-time/ofmngaeacndglijmheklbcnbjfdcohke","github":"https://github.com/levilindsey/dark-time"},"jobTitle":"","location":"","date":"2/2021","categories":["web","frontend","side-project","solo-work","chrome-web-store"],"images":[{"fileName":"thumbnail.png","description":"A simple clock logo."}],"videos":[],"content":"_Dark Time is owned by [Snoring Cat LLC](https://snoringcat.games)._\r\n\r\nDark Time provides a new-tab experience that shows your local time with a dark background color and a low-contrast font. A non-jarring look is especially important for new-tab content, since this is shown every time you open a new tab!\r\n\r\nIf you'd instead prefer to customize the colors, Dark Time lets you do that! 
Just open up extension settings in the usual way through Chrome's UI."},{"id":"google-cloud-platform","titleShort":"Google:\nCloud\nPlatform","titleLong":"Google: Cloud Platform","urls":{"homepage":"https://cloud.google.com/"},"jobTitle":"Senior Software Engineer","location":"Seattle, WA","date":{"start":"1/2017","end":"1/2021"},"categories":["work","web","frontend","angularjs","typescript","angular-2","google-closure-tools","rxjs","sql","teamwork"],"images":[{"fileName":"cloud-console-console-nav-panel.png","description":"This screenshot shows the console homepage and main navigation panel. As you can see, there is a lot of content in this app! Making navigation and discoverability work well is a constant struggle with any app this large."},{"fileName":"cloud-console-panels.png","description":"This screenshot shows some of the other top-level panels in the app. Essentially every panel and every page is maintained by a different team within Google's massive Cloud organization, and getting them all to work together seamlessly is non-trivial!"}],"videos":[],"content":"_[Google][google-url] is a multinational corporation specializing in Internet-related services and products. [Google Cloud Platform][cloud-url] (GCP) is a cloud computing service that offers hosting on the same supporting infrastructure that Google uses internally for their end-user products. The Cloud Console is the web UI for end users of GCP._\r\n\r\nLevi was a tech lead on the frontend framework for the Cloud Console. 
There he pushed the limits of what Angular and TypeScript can do in what is probably the world's largest Angular application.\r\n\r\nHighlights:\r\n\r\n- Helped lead the >4-year migration of >10M lines of JS code, with >8M monthly active users, by >800 SWEs across >180 teams, from AngularJS to Angular.\r\n- Tech lead (of 4 other SWEs) for Angular Infrastructure:\r\n - Bootstrapping the application\r\n - Client-side routing\r\n - Top-level UI layout\r\n - Angular migration strategies\r\n - Deferred loading\r\n - Ensuring client infrastructure code health\r\n- Collaborated across teams to align priorities, review designs, and guide strategies.\r\n- Designed novel patterns for deferred-loading AoT-compiled Angular components.\r\n\r\n[google-url]: https://google.com/about\r\n[cloud-url]: https://cloud.google.com\r\n"},{"id":"ooboloo","titleShort":"2D platformer\ncollecting\nblobs","titleLong":"OoboloO: A 2D platformer with collecting and losing blobs","urls":{"demo":"https://levilindsey.itch.io/ooboloo","global-game-jam":"https://globalgamejam.org/2021/games/ooboloo-2","github":"https://github.com/levilindsey/global-game-jam-2021/"},"jobTitle":"","location":"","date":"1/2021","categories":["side-project","art","animation","music","app","godot","game","2D","game-jam","teamwork"],"images":[{"fileName":"screenshot_gameplay_1.png","description":"A screenshot showing gameplay of OoboloO."},{"fileName":"screenshot_gameplay_2.png","description":"A screenshot showing gameplay of OoboloO."},{"fileName":"screenshot_gameplay_3.png","description":"A screenshot showing gameplay of OoboloO."},{"fileName":"screenshot_gameplay_4.png","description":"A screenshot showing gameplay of OoboloO."},{"fileName":"cover_art.png","description":"OoboloO's fancy cover art!"},{"fileName":"ooboloo-game-over-screen.png","description":"OoboloO's game-over screen."}],"videos":[{"videoHost":"youtube","id":"qNUtp4FSwaY","description":"A demonstration of the gameplay for OoboloO."}],"content":"_You're a slime that's lost some of your blobs! Collect them and make your way to the exit while avoiding the vicious cave enemies--but be careful, as they are also your only means of maneuvering!_\r\n\r\nThis was made by:\r\n\r\n- Connie Wan\r\n- Daisy Muradyan\r\n- Levi Lindsey\r\n- Zaven Muradyan\r\n\r\n## Global Game Jam 2021\r\n\r\nThis game was created as a submission for [Global Game Jam 2021](https://globalgamejam.org/2021/games/ooboloo-2). All design, code, images, sound effects, and music were created in under 48 hours.\r\n\r\n### The theme: \"Lost and found\"\r\n\r\nThe theme for the game jam was \"lost and found\". This game addresses the theme by having the player collect little bits of themselves that are scattered around the level, but while doing so the player also loses bits of themselves.\r\n\r\n## Software used\r\n\r\n- Game engine: [Godot](https://godotengine.org/)\r\n- Art: [Krita](https://krita.org/)\r\n- Art: [Aseprite](https://www.aseprite.org/)\r\n- Music: [DefleMask](https://deflemask.com/)\r\n- Sound effects: [Bfxr](https://www.bfxr.net/)\r\n- Particle effects: [Pixel FX Designer](https://codemanu.itch.io/particle-fx-designer)\r\n"},{"id":"space-debris","titleShort":"Space\nflight\ngame","titleLong":"Space Debris: A space-flight simulation game","urls":{"demo":"https://levi.dev/space-debris","github":"https://github.com/levilindsey/space-debris"},"jobTitle":"","location":"","date":{"start":"6/2015","end":"9/2018","tieBreaker":1},"categories":["side-project","web","frontend","animation","app","dat.gui","gulp.js","webgl","es2015","game","3D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"This shows the starting screen for the space-debris game. You pilot a ship that is constantly surrounded by oncoming asteroids. 
You can either try to escape them or destroy them with your torpedoes."},{"fileName":"screenshot2.png","description":"This shows the ship flying and shooting torpedoes."}],"videos":[],"content":"_Fly through starscapes destroying space debris!_\r\n\r\nThis WebGL-based space-flight simulation video game is built on a collection of supporting libraries that Levi created:\r\n- [grafx][grafx]: A 3D graphics framework for WebGL\r\n- [physx][physx]: A physics engine with 3D rigid-body dynamics and collision detection (with impulse-based resolution)\r\n- [gamex][gamex]: A 3D WebGL-based game engine\r\n\r\n## Notable Features\r\n\r\n- A ton of cool features in supporting libraries—notably:\r\n - [grafx][grafx]: A 3D graphics framework for WebGL.\r\n - [physx][physx]: A physics engine with 3D rigid-body dynamics and collision detection (with impulse-based resolution).\r\n- An algorithm for calculating the intercept velocity of B, given the position and velocity of A and the position and speed of B.\r\n- Coordination between multiple [WebGL programs][webgl-program].\r\n- Procedurally generated asteroid shapes.\r\n- A procedurally generated starscape.\r\n- A user-controllable ship flying through space and shooting asteroids!\r\n- Rendering lat-long spherical textures over [tessellated][tesselation] icosahedra.\r\n- A post-processing [bloom][bloom] shader.\r\n\r\n\r\n[gamex]: https://github.com/levilindsey/gamex\r\n[grafx]: https://github.com/levilindsey/grafx\r\n[physx]: https://github.com/levilindsey/physx\r\n\r\n[webgl-program]: https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram\r\n[tesselation]: https://en.wikipedia.org/wiki/Tessellation\r\n[bloom]: https://en.wikipedia.org/wiki/Bloom_(shader_effect)\r\n"},{"id":"game-engine","titleShort":"WebGL\ngraphics &\nphysics\nengine","titleLong":"A WebGL graphics framework, physics engine, and game 
engine","urls":{"demo":"https://levi.dev/dynamics","github":"https://github.com/levilindsey/gamex"},"jobTitle":"","location":"","date":{"start":"6/2015","end":"9/2018"},"categories":["side-project","web","frontend","animation","library","app","dat.gui","gulp.js","webgl","es2015","game","3D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"This shows a dynamics simulation of objects colliding into each other."},{"fileName":"screenshot2.png","description":"This shows a dynamics simulation of objects falling onto a flat surface as well as into each other."}],"videos":[],"content":"_A 3D WebGL-based game engine. Includes a 3D WebGL-based [graphics framework][grafx], a [physics engine][physx] with 3D rigid-body dynamics and collision detection (with impulse-based resolution), and miscellaneous other features that are commonly needed when creating a game._\r\n\r\n## Grafx: A 3D graphics framework for WebGL\r\n\r\n### Notable features\r\n\r\n- A system for defining 3D shapes, models, and controllers.\r\n- A system for configuring and drawing multiple simultaneous [WebGL programs][webgl-program].\r\n- A system for loading and compiling WebGL shaders and programs.\r\n- Support for both per-model and post-processing shaders. 
\r\n- A system for loading textures.\r\n- An animation framework.\r\n- A camera framework with built-in first-person and third-person cameras.\r\n- A collection of basic shape definitions, each with vertex position, normal, texture coordinate, and vertex indices configurations.\r\n- Algorithms for converting to and from a vertex-indexing array.\r\n- An algorithm for polygon [tessellation][tesselation].\r\n - This is used for subdividing all faces of a polygon into a parameterized number of triangles.\r\n - All of the resulting vertices can then be pushed out to a given radius in order to render a smoother sphere.\r\n- An algorithm for mapping spherical lat-long textures onto an icosahedron.\r\n - This involves careful consideration of the texture coordinates around the uneven seam of the icosahedron.\r\n\r\n## Physx: A physics engine with 3D rigid-body dynamics and collision detection (with impulse-based resolution)\r\n\r\n### Notable features\r\n\r\n- Includes continuous [collision detection][collision-detection] with [impulse-based resolution][collision-resolution].\r\n- [Decouples the physics simulation and animation rendering time steps][stable-time-steps], and uses a fixed timestep for the physics loop. This provides numerical stability and precise reproducibility.\r\n- Suppresses linear and angular momenta below a certain threshold.\r\n\r\nThe engine consists primarily of a collection of individual physics jobs and an update loop. This update loop is in turn controlled by the animation loop. However, whereas the animation loop renders each job once per animation frame—regardless of how much time actually elapsed since the previous frame—the physics loop updates its jobs at a constant rate. To reconcile these frame rates, the physics loop runs as many times as needed to catch up to the time of the current animation frame. 
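That catch-up behavior is the standard fixed-timestep accumulator pattern. A minimal sketch of the idea (the `catchUpPhysics` name, the callback shape, and the 240 Hz rate are illustrative assumptions, not the actual physx API):

```javascript
// Fixed-timestep accumulator: run as many constant-duration physics steps
// as are needed to catch up to the current animation frame, and return the
// leftover time to carry into the next frame. Stepping with the same fixed
// dt every time is what provides numerical stability and reproducibility.
const PHYSICS_DT = 1 / 240; // seconds; much higher rate than ~60 fps rendering

function catchUpPhysics(accumulator, elapsedSeconds, physicsStep) {
  accumulator += elapsedSeconds;
  while (accumulator >= PHYSICS_DT) {
    physicsStep(PHYSICS_DT); // always the same fixed dt
    accumulator -= PHYSICS_DT;
  }
  return accumulator; // leftover, always < PHYSICS_DT
}

// Driven once per animation frame: a 0.021 s render frame at a 240 Hz
// physics rate runs five physics steps, with the remainder carried forward.
let acc = 0;
let steps = 0;
acc = catchUpPhysics(acc, 0.021, (dt) => { steps += 1; });
```

A slow render frame therefore triggers several small physics updates rather than one large, unstable step.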
The physics frame rate should be much higher than the animation frame rate.\r\n\r\n### Collision Detection\r\n\r\nThis physics engine also includes a collision-detection pipeline. This will detect collisions between collidable bodies and update their momenta in response to the collisions.\r\n\r\n- Consists of an efficient broad-phase collision detection step followed by a precise narrow-phase step.\r\n- Calculates the position, surface normal, and time of each contact.\r\n- Calculates the impulse of a collision and updates the bodies' linear and angular momenta in response.\r\n- Applies Coulomb friction to colliding bodies.\r\n- Sub-divides the time step to more precisely determine when and where a collision occurs.\r\n- Supports multiple collisions with a single body in a single time step.\r\n- Efficiently supports bodies coming to rest against each other.\r\n- Bodies will never penetrate one another.\r\n- This does not address the [tunnelling problem][tunnelling-problem]. That is, it is possible for two fast-moving bodies to pass through each other as long as they did not intersect each other during any time step.\r\n- This only supports collisions between certain types of shapes. Fortunately, this set provides reasonable approximations for most other shapes. 
The supported types of shapes are:\r\n - [sphere][sphere]\r\n - [capsule][capsule]\r\n - [AABB][aabb]\r\n - [OBB][obb]\r\n\r\n\r\n[grafx]: https://github.com/levilindsey/grafx\r\n[physx]: https://github.com/levilindsey/physx\r\n\r\n[webgl-program]: https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram\r\n[tesselation]: https://en.wikipedia.org/wiki/Tessellation\r\n\r\n[collision-detection]: https://en.wikipedia.org/wiki/Collision_detection\r\n[collision-resolution]: https://en.wikipedia.org/wiki/Collision_response#Impulse-based_contact_model\r\n[stable-time-steps]: https://gafferongames.com/post/fix_your_timestep/\r\n[tunnelling-problem]: https://www.aorensoftware.com/blog/2011/06/01/when-bullets-move-too-fast/\r\n[sphere]: https://en.wikipedia.org/wiki/Sphere\r\n[capsule]: https://en.wikipedia.org/wiki/Capsule_(geometry)\r\n[aabb]: https://en.wikipedia.org/w/index.php?title=Axis-aligned_bounding_box&redirect=no\r\n[obb]: https://en.wikipedia.org/w/index.php?title=Oriented_bounding_box&redirect=no\r\n"},{"id":"google-greentea","titleShort":"Google:\nGreentea","titleLong":"Google: Greentea","urls":{"homepage":"https://google.com/about"},"jobTitle":"Software Engineer","location":"Mountain View, CA","date":{"start":"4/2015","end":"12/2016"},"categories":["work","web","frontend","angularjs","dart","angular-2","teamwork"],"images":[{"fileName":"greentea-public-flutter-demo-screenshot.png","description":"This is a public demo screenshot of the Flutter (mobile) version of the Greentea app."}],"videos":[],"content":"_[Google][google-url] is a multinational corporation specializing in Internet-related services and products. [Greentea][greentea-url] is an internal customer relationship management (CRM) application that is used by Google sales teams._\r\n\r\nLevi owned a large portion of the codebase. He filled a leadership and mentorship role on his team. 
He helped migrate the codebase to Angular 2.\r\n\r\n[google-url]: https://google.com/about\r\n[greentea-url]: http://angularjs.blogspot.com/2015/11/how-google-uses-angular-2-with-dart.html\r\n"},{"id":"aldenwitt.com","titleShort":"Portfolio\nwebsite:\nSongwriter","titleLong":"Portfolio for a songwriter","urls":{"homepage":"http://levi.dev/aldenwitt.com","github":"https://github.com/levilindsey/aldenwitt.com"},"jobTitle":"","location":"Nashville, TN","date":{"start":"7/2016","end":"10/2016"},"categories":["side-project","web","website","frontend","angular-2","gulp.js","typescript","animation","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Alden Witt's homepage is a polaroid. His website aims for realism and shows an envelope, polaroid, and napkin animating over a notepad on a rough wooden desk."},{"fileName":"screenshot2.png","description":"Alden Witt's Contact page is a napkin."},{"fileName":"screenshot3.png","description":"Alden Witt's Bio page is an envelope."},{"fileName":"screenshot4.png","description":"aldenwitt.com is also responsive for mobile screens."}],"videos":[],"content":"This website was a professional portfolio for songwriter Alden Witt, \"the best unsigned writer in Nashville.\" It has since been deprecated.\r\n\r\nThis website used Angular 2 with TypeScript. It also made use of Levi's custom animation framework for sliding the different pages in and out of view.\r\n\r\n[main-url]: https://levi.dev/aldenwitt.com"},{"id":"idean","titleShort":"Idean","titleLong":"Idean","urls":{"homepage":"http://idean.com"},"jobTitle":"UI Developer","location":"Palo Alto, CA","date":{"start":"3/2014","end":"3/2015"},"categories":["work","web","frontend","back-end","user-experience","mean-stack","angular","node.js","mongodb","express","gulp.js","grunt","d3.js","svg","php","teamwork"],"images":[{"fileName":"idean-hawaii.jpg","description":"Hawaii!! 
Idean flew their employees to Oahu for their annual retreat."},{"fileName":"palo-alto-studio.jpg","description":"Idean's office at their headquarters in Palo Alto. The home-like atmosphere of the house definitely reflects the friendly and relaxed atmosphere of the company as a whole."},{"fileName":"idean-definition.png","description":"Life's too short for crappy UX!"}],"videos":[{"videoHost":"vimeo","id":"50565896","description":"Idean's introductory video: why a good user experience is important and how Idean can help."},{"videoHost":"vimeo","id":"88468432","description":"Idean's UX Rap!"}],"content":"_[Idean][main-url] is a global design agency dedicated to delivering the best possible User Experience._\r\n\r\nLevi used JavaScript to create awesome user experiences across many different projects for many different clients. Unfortunately, all of the projects he worked on are under NDAs, so he cannot disclose too much detail about any of them.\r\n\r\n### Project A: Enterprise Security-Management Application\r\n\r\nLevi developed a high-fidelity prototype for the client’s enterprise security-management application. This included an AngularJS framework with an intricate frontend routing mechanism.\r\n\r\n_Front and back end: Node.js, AngularJS, MongoDB, Gulp, SASS_\r\n\r\n### Project B: Enterprise Storage-Management Application\r\n\r\nLevi developed a first iteration of the frontend for the client's enterprise storage-management application. This included a complex SVG-based workspace in which resources were graphically configured within tree structures.\r\n\r\n_Front end: AngularJS, Gulp, SVG, SASS_\r\n\r\n### Project C: Analytics Dashboard\r\n\r\nLevi developed a web portal for analyzing data collected from mobile applications. 
This included highly configurable data visualizations.\r\n\r\n_Front end: AngularJS, D3.js, Gulp, SASS_\r\n\r\n### Project D: Web Portal for SDK Specifications\r\n\r\nLevi developed a web portal for displaying the specifications of the client's RESTful API. This included the ability to test out and tweak each of the different API calls directly from the portal. Levi also created a simple test server for handling the requests.\r\n\r\n_Front and back end: Node.js, AngularJS, MongoDB, Gulp, SASS_\r\n\r\n### Project E: Checkout Wizard\r\n\r\nLevi helped develop a complex checkout wizard system for the client's translation service. This involved many custom widgets for each page in addition to complex routing logic. This also required interfacing with server APIs to update the quote data after each step.\r\n\r\n_Front end: AngularJS, Gulp, SASS_\r\n\r\n### Project F: Content Management System\r\n\r\nLevi developed the full-stack infrastructure for a content management system.\r\n\r\n_Front and back end: Node.js, AngularJS, MongoDB, Grunt, SASS_\r\n\r\n### Project G: WordPress site\r\n\r\nLevi updated and maintained the client's pre-existing WordPress website.\r\n\r\n### Project H: Squarespace site\r\n\r\nLevi customized the client's pre-existing Squarespace website.\r\n\r\n[main-url]: http://idean.com"},{"id":"benlindseydesign.com","titleShort":"Portfolio\nwebsite:\nDesigner","titleLong":"Portfolio for an Industrial Designer","urls":{"homepage":"http://benlindseydesign.com","github":"https://github.com/levilindsey/benlindseydesign.com"},"jobTitle":"","location":"Portland, OR","date":{"start":"12/2014","end":"1/2015"},"categories":["side-project","web","website","frontend","angular","gulp.js","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Ben's portfolio website is still a work in progress, but this is what the layout of the homepage looks like at the moment."}],"videos":[],"content":"This website is a professional portfolio for interaction designer 
Ben Lindsey.\r\n\r\n[main-url]: http://benlindseydesign.com"},{"id":"hex-grid","titleShort":"Hex Grid","titleLong":"Hex Grid","urls":{"demo":"https://levi.dev/hex-grid","github":"https://github.com/levilindsey/hex-grid"},"jobTitle":"","location":"","date":{"start":"7/2014","end":"1/2015"},"categories":["side-project","web","frontend","svg","canvas","animation","library","app","dat.gui","gulp.js","2D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"This shows the hex-grid layout containing many of Levi's posts at random positions. This also captures a \"lightning\" animation in progress."},{"fileName":"screenshot2.png","description":"This shows the grid layout in its expanded form. After clicking on a tile with a post, the grid expands around that tile in order to show an enlarged panel with details about the post."},{"fileName":"screenshot3.png","description":"A dat.GUI menu makes most of the parameters in hex-grid dynamically configurable by the user. This is great for debugging, tuning, and playing around."},{"fileName":"screenshot4.png","description":"This shows the hex-grid layout containing many of Levi's posts at random positions."},{"fileName":"hg-sector-expansion-1.png","description":"This illustrates how the grid is expanded in order to show an enlarged area with details for a given tile. The grid is divided into six sectors around the given tile. These are then each translated in a different direction."},{"fileName":"hg-sector-expansion-2.png","description":"This illustrates which tile positions lie within the viewport after both the grid has been expanded and panning has occurred in order to center the viewport on the expanded tile. This also illustrates where new tiles will need to be created in order to not show gaps within the expanded grid."},{"fileName":"hg-indices.png","description":"This illustrates how the hex-grid system stores data for three different types of tile relationships. 
For each of these relationships, both the vertical and horizontal grid configurations are illustrated."}],"videos":[],"content":"#### A dynamic, expandable, animated grid of hexagonal tiles for displaying posts\r\n\r\nLevi was bored with the standard grid layout and wanted to play with particle systems and crazy animations. So he made hex-grid.\r\n\r\n## Features\r\n\r\nSome features of this package include:\r\n\r\n- A particle system complete with neighbor and anchor position spring forces.\r\n- An assortment of **persistent** animations that make the grid _exciting to watch_.\r\n- An assortment of **transient** animations that make the grid _exciting to interact with_.\r\n- A control panel that enables you to adjust most of the many different parameters of this system.\r\n- The ability to display custom collections of posts.\r\n - These posts will be displayed within individual tiles.\r\n - These tile posts can be expanded for more information.\r\n - The contents of these posts use standard [Markdown syntax][markdown-url], which is then parsed by the system for displaying within the grid.\r\n\r\n## The Tile-Expansion Algorithm\r\n\r\nThe following diagrams help to visualize how the grid is expanded.\r\n\r\n### A Basic Sector Expansion\r\n\r\nThis image illustrates how the grid is expanded in order to show an enlarged area with details for a given tile. The grid is divided into six sectors around the given tile. These are then each translated in a different direction.\r\n\r\n![Basic sector expansion][sector-expansion-1-image]\r\n\r\n### Sector Expansion with Viewport Panning and Creating New Tiles\r\n\r\nThis image illustrates which tile positions lie within the viewport after both the grid has been expanded and panning has occurred in order to center the viewport on the expanded tile. 
This also illustrates where new tiles will need to be created in order to not show gaps within the expanded grid.\r\n\r\n![Basic sector expansion with panning and new tiles][sector-expansion-2-image]\r\n\r\n### A Reference for how Neighbor Tile and Sector Data is Stored and Indexed\r\n\r\nThis image illustrates how the hex-grid system stores data for three different types of tile relationships. For each of these relationships, both the vertical and horizontal grid configurations are illustrated.\r\n\r\nEach tile holds a reference to each of its neighbor tiles. These references are stored in an array that is indexed according to the position of the neighbor tiles relative to the given tile. The left-most images show which positions correspond to which indices.\r\n\r\nThe expanded grid holds an array with references to each of the six sectors. The middle images show which sectors correspond to which indices.\r\n\r\nA sector stores references to its tiles within a two-dimensional array. The right-most images show how this two-dimensional array is indexed.\r\n\r\n![Reference for how neighbor tile and sector data is stored and indexed][indices-image]\r\n\r\n## Acknowledgements / Technology Stack\r\n\r\nThe following packages/libraries/projects were used in the development of hex-grid:\r\n\r\n- [Gulp.js][gulp-url]\r\n- [Bower][bower-url]\r\n- [dat.gui][dat-gui-url]\r\n- [Showdown][showdown-url]\r\n- Additional packages that are available via [NPM][npm-url] (these are listed within the `package.json` file)\r\n\r\n\r\n[sector-expansion-1-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/hex-grid/hg-sector-expansion-1.png\r\n[sector-expansion-2-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/hex-grid/hg-sector-expansion-2.png\r\n[indices-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/hex-grid/hg-indices.png\r\n\r\n[markdown-url]: http://daringfireball.net/projects/markdown/\r\n[dat-gui-url]: 
http://code.google.com/p/dat-gui\r\n[gulp-url]: http://gulpjs.com\r\n[bower-url]: http://bower.io\r\n[npm-url]: https://npmjs.org\r\n[showdown-url]: https://github.com/showdownjs/showdown\r\n"},{"id":"generator-meanie","titleShort":"MEAN-stack\ngenerator","titleLong":"MEAN-stack Yeoman generator with Gulp","urls":{"npm":"https://npmjs.org/package/generator-meanie","github":"https://github.com/levilindsey/generator-meanie"},"jobTitle":"","location":"","date":{"start":"5/2014","end":"12/2014"},"categories":["side-project","web","mean-stack","frontend","back-end","mongodb","express","angular","node.js","gulp.js","yeoman","npm","library","solo-work"],"images":[{"fileName":"screenshot4.png","description":"This Yeoman generator includes many prompts that help to customize the boilerplate of your application."},{"fileName":"screenshot1.png","description":"The generated project includes gulp tasks to help tackle many common build problems. Each gulp task is separated into its own individual file and then included by the main gulpfile.js. This obeys the SRP and helps to keep things modular."},{"fileName":"screenshot3.png","description":"The generated project includes a frontend file structure that closely follows the Best Practice Recommendations for Angular App Structure, but with a few additional logical sub-divisions."},{"fileName":"screenshot2.png","description":"The generated project includes a server file structure that has been separated into distinct functional blocks."}],"videos":[],"content":"[![License Status][license-image]][license-url]\r\n[![NPM version][npm-image]][npm-url]\r\n[![Downloads Status][downloads-image]][downloads-url]\r\n[![Build Status][travis-image]][travis-url]\r\n[![Dependency Status][depstat-image]][depstat-url]\r\n[![Flattr this git repo][flattr-image]][flattr-url]\r\n\r\n_[MEAN stack][mean-url] generator for [Yeoman][yeoman-url] with [gulp][gulp-url]. 
Follows the [Best Practice Recommendations for Angular App Structure][angular-best-practices-url], and, in general, attempts to follow best practices throughout._\r\n\r\n## What this is\r\n\r\n- **Modular**: The main goal of this generator is to create a highly componentized file structure for both [frontend][angular-best-practices-url] and server-side code. This helps to keep your code modular, scalable, and easier to understand.\r\n- **Gulp tasks**: This includes a wide array of gulp tasks for optimizing frontend performance and streamlining your development process.\r\n- **App infrastructure**: This creates a comprehensive boilerplate infrastructure for an end-to-end web application using the MEAN stack. This likely includes some extra bells and whistles that you may not want to include in your particular app. The goal of this project is to promote development through _subtractive_ synthesis. That is, this generator hopefully creates infrastructure that handles most of the high-level problems in your web app, along with some other common features that you will likely remove.\r\n- **Tests**: This includes a testing infrastructure using the [Karma][karma-url] test runner and the [Jasmine][jasmine-url] test framework for testing the frontend code.\r\n- **SASS**: This uses the [SASS][sass-url] stylesheet language.\r\n- **UI-Router**: This uses the [UI-Router][ui-router-url] library for more powerful frontend routing and state management in Angular.\r\n\r\n## Why use this generator instead of one of the many other options?\r\n\r\nMaybe you shouldn't! Check out the file structure, the gulp tasks, and the various libraries and tools that are used in this project. If these are all aspects that you agree with, then please try this generator out! Otherwise, there are many other great generators out there for you to use. 
Addy Osmani has an [excellent article][addy-osmani-url] describing MEAN-stack development and a quick survey of some of the more popular generators and boilerplate options for it. Each of these options has different benefits and uses a different set of tools.\r\n\r\n## How to use it\r\n\r\n```bash\r\nnpm install -g generator-meanie\r\nyo meanie\r\n```\r\n\r\nSee the [getting set up guide][getting-set-up-url] for a step-by-step walkthrough for setting things up and running.\r\n\r\n## Technology stack / acknowledgements\r\n\r\nThis project uses technology from a number of third parties. These technologies include:\r\n\r\n- [Node.js][node-url]\r\n- [AngularJS][angular-url]\r\n- [MongoDB][mongo-url]\r\n- [gulp.js][gulp-url]\r\n- [SASS][sass-url]\r\n- [Yeoman][yeoman-url]\r\n- [Git][git-url]\r\n- Numerous other packages that are available via [NPM][npm-url] (these are listed within the [`package.json`][package.json-url] file)\r\n\r\n## Background\r\n\r\nThis project is an on-going effort to collect common patterns and processes for developing web apps using the MEAN stack and gulp. It is constantly evolving and gaining new features.\r\n\r\nThe contents of this project are strongly opinionated. This is all code that was originally developed and tested by Levi for his own personal use. 
That being said, it works great for him, so it will probably work great for you too!\r\n\r\nFeedback, bug reports, feature requests, and pull requests are very welcome!\r\n\r\n## Next steps\r\n\r\nSee the [project roadmap][roadmap-url] for Levi's future plans for this generator.\r\n\r\n\r\n[flattr-url]: https://flattr.com/submit/auto?user_id=levisl176&url=github.com/levilindsey/generator-meanie&title=generator-meanie&language=javascript&tags=github&category=software\r\n[flattr-image]: http://api.flattr.com/button/flattr-badge-large.png\r\n\r\n[npm-url]: https://npmjs.org/package/generator-meanie\r\n[npm-image]: http://img.shields.io/npm/v/generator-meanie.svg?style=flat-square\r\n[npm-image-old]: https://badge.fury.io/js/generator-meanie.png\r\n\r\n[travis-url]: https://travis-ci.org/levisl176/generator-meanie\r\n[travis-image]: http://img.shields.io/travis/levisl176/generator-meanie/master.svg?style=flat-square\r\n[travis-image-old]: https://secure.travis-ci.org/levisl176/generator-meanie.png?branch=master\r\n\r\n[coveralls-url]: https://coveralls.io/r/levisl176/generator-meanie\r\n[coveralls-image]: http://img.shields.io/coveralls/levisl176/generator-meanie/master.svg?style=flat-square\r\n[coveralls-image-old]: https://img.shields.io/coveralls/levisl176/generator-meanie.svg?style=flat\r\n\r\n[depstat-url]: https://david-dm.org/levisl176/generator-meanie\r\n[depstat-image]: http://img.shields.io/david/levisl176/generator-meanie.svg?style=flat-square\r\n[depstat-image-old]: https://david-dm.org/levisl176/generator-meanie.svg\r\n\r\n[license-url]: https://github.com/levilindsey/generator-meanie/blob/master/LICENSE\r\n[license-image]: http://img.shields.io/npm/l/generator-meanie.svg?style=flat-square\r\n\r\n[downloads-url]: https://npmjs.org/package/generator-meanie\r\n[downloads-image]: http://img.shields.io/npm/dm/generator-meanie.svg?style=flat-square\r\n\r\n[getting-set-up-url]: 
https://github.com/levilindsey/generator-meanie/blob/master/docs/getting-set-up.md\r\n[roadmap-url]: https://github.com/levilindsey/generator-meanie/blob/master/docs/roadmap.md\r\n[package.json-url]: https://github.com/levilindsey/generator-meanie/blob/master/package.json\r\n[bower.json-url]: https://github.com/levilindsey/generator-meanie/blob/master/bower.json\r\n\r\n[angular-best-practices-url]: https://docs.google.com/document/d/1XXMvReO8-Awi1EZXAXS4PzDzdNvV6pGcuaF4Q9821Es/pub\r\n[mean-url]: http://en.wikipedia.org/wiki/MEAN\r\n[yeoman-url]: http://yeoman.io/\r\n[gulp-url]: http://gulpjs.com/\r\n[node-url]: http://nodejs.org/\r\n[angular-url]: https://angularjs.org/\r\n[mongo-url]: https://mongodb.org/\r\n[sass-url]: http://sass-lang.com/\r\n[git-url]: http://git-scm.com/\r\n[npm-url]: http://npmjs.org/\r\n[bower-url]: http://bower.io/\r\n[traceur-url]: https://github.com/google/traceur-compiler\r\n\r\n[karma-url]: http://karma-runner.github.io/0.12/index.html\r\n[jasmine-url]: http://jasmine.github.io/2.0/introduction.html\r\n[protractor-url]: http://angular.github.io/protractor/#/\r\n[mocha-url]: http://mochajs.org/\r\n[chai-url]: http://chaijs.com/\r\n[sinon-url]: http://sinonjs.org/\r\n\r\n[ui-router-url]: https://github.com/angular-ui/ui-router\r\n[passport-url]: http://passportjs.org/\r\n\r\n[addy-osmani-url]: http://addyosmani.com/blog/full-stack-javascript-with-mean-and-yeoman/"},{"id":"shouldihaveanother.beer","titleShort":"Beer?:\nWeb doodle","titleLong":"shouldihaveanother.beer: A web doodle","urls":{"demo":"http://shouldihaveanother.beer","github":"https://github.com/levilindsey/shouldihaveanother.beer"},"jobTitle":"","location":"","date":"10/2014","categories":["side-project","web","website","doodle","art","frontend","canvas","animation","gulp.js","tiny","2D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"This simple web app features randomized delicious beer colors, and delightful animated 
carbonation."},{"fileName":"screenshot2.png","description":"This simple web app features randomized delicious beer colors, and delightful animated carbonation."},{"fileName":"screenshot3.png","description":"This simple web app features randomized delicious beer colors, and delightful animated carbonation."}],"videos":[],"content":"Levi built and deployed this simple app in a few hours. It exhibits some fun canvas-based animation.\r\n\r\nHis main motivation was the availability of fun new top-level domains.\r\n\r\nThis simple web app features randomized delicious beer colors, and delightful animated carbonation.\r\n\r\n\r\n[main-url]: http://shouldihaveanother.beer"},{"id":"text-animation","titleShort":"Text\nanimation","titleLong":"Text animation","urls":{"bower":"http://bower.io/search/?q=text-animation","demo":"https://levi.dev/text-animation","github":"https://github.com/levilindsey/text-animation","codepen":"http://codepen.io/levisl176/pen/HGJdF"},"jobTitle":"","location":"","date":"7/2014","categories":["side-project","web","frontend","animation","library","bower","gulp.js","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Text falling into place with a shadow effect."},{"fileName":"screenshot2.png","description":"Text sliding into place."},{"fileName":"screenshot3.png","description":"Text swirling into place."},{"fileName":"screenshot4.png","description":"Text rolling into place."}],"videos":[],"content":"#### Character-by-character animation of text\r\n\r\nThis text-animation package makes it easy to animate the text of any collection of HTML elements. With this package, each character animates individually, and it is simple to customize this animation.\r\n\r\nThis package is available in the Bower registry as [`text-animation`][bower-url].\r\n\r\n### The In-Order Animation Algorithm\r\n\r\n1. Iterate through each descendant node in the root element's DOM structure \r\n a. This uses a pre-order tree traversal\r\n b. 
Store the text of each text node along with the parent element and next sibling node associated with the text node\r\n    c. Fix each descendant element with its original dimensions\r\n    d. Empty out all text nodes\r\n2. Iterate through each character and animate it \r\n    a. This is now a simple linear iteration, because we flattened the DOM structure in our earlier traversal \r\n    b. Animate the character \r\n        1. Add the character to a span \r\n        2. Insert the span into the character's parent element \r\n            a. If the original text node has a next sibling node, then insert this span before that node \r\n            b. Otherwise, append this node to the end of the original text node's parent node \r\n        3. Run the actual animation of the isolated character \r\n    c. Finish animating the character \r\n        1. Remove the span \r\n        2. Concatenate the character back into the original text node \r\n\r\nThe following three representations of the same DOM structure may help to understand how this algorithm flattens and stores the DOM representation.\r\n\r\n#### Original HTML Representation\r\n\r\n    <body>\r\n        H\r\n        <p>\r\n            e\r\n        </p>\r\n        y\r\n        <div>\r\n            D\r\n            <p>\r\n                O\r\n            </p>\r\n            M\r\n        </div>\r\n        !\r\n    </body>\r\n\r\n#### Visual Tree Representation\r\n\r\n                                  <body>:Element\r\n         _________________________________|_________________________________\r\n        /               /                 |                 \\               \\\r\n    H:TextNode     <p>:Element       y:TextNode       <div>:Element     !:TextNode\r\n                        |                       _______________|_______________\r\n                   e:TextNode                  /               |               \\\r\n                                          D:TextNode      <p>:Element     M:TextNode\r\n                                                               |\r\n                                                          O:TextNode\r\n\r\n#### JavaScript Object Structure of Text Nodes\r\n\r\n    [\r\n        {\"parentElement\": <body>, \"nextSiblingNode\": <p>, \"text\": \"H\"},\r\n        {\"parentElement\": <p>, \"nextSiblingNode\": null, \"text\": \"e\"},\r\n        {\"parentElement\": <body>, \"nextSiblingNode\": <div>, \"text\": \"y\"},\r\n        {\"parentElement\": <div>, \"nextSiblingNode\": <p>, \"text\": \"D\"},\r\n        {\"parentElement\": <p>, \"nextSiblingNode\": null, \"text\": \"O\"},\r\n        {\"parentElement\": <div>, \"nextSiblingNode\": null, \"text\": \"M\"},\r\n        {\"parentElement\": <body>, \"nextSiblingNode\": null, \"text\": \"!\"}\r\n    ]\r\n\r\n\r\n[main-url]: https://levi.dev/text-animation\r\n[codepen-url]: http://codepen.io/levisl176/full/HGJdF\r\n[bower-url]: http://bower.io/search/?q=text-animation"},{"id":"anvato","titleShort":"Anvato","titleLong":"Anvato","urls":{"homepage":"http://anvato.com"},"jobTitle":"Software Engineer","location":"Mountain View, CA","date":{"start":"10/2013","end":"3/2014"},"categories":["work","web","frontend","android","java","roku","windows-phone","c-sharp","video","teamwork"],"images":[{"fileName":"screenshot1.png","description":"The Anvato web video player."},{"fileName":"screenshot4.png","description":"The Anvato web video player with the preview ribbon open while the user scrubs along the seek bar."},{"fileName":"screenshot5.png","description":"The Anvato web video player with the preview popup open while the user hovers over the seek bar."},{"fileName":"screenshot3.png","description":"The Anvato web video player with the volume control expanded."},{"fileName":"screenshot2.png","description":"The splash screen for the Anvato web video player."}],"videos":[],"content":"_[Anvato][main-url] provides live and on-demand video management, analytics, syndication, and tracking features along with video player SDKs for iOS, Android, and web._\r\n\r\nLevi developed HTTP Live Streaming video-player SDKs for the HTML5, Windows Store, Windows Phone, Roku, and Android platforms to match detailed specifications from clients including Fox, NBC, and Univision.\r\n\r\nThe HTML5 player SDK loaded and played four times faster than that of Anvato’s leading competitor.\r\n\r\nThe combined Anvato player SDKs drew more than 528,000 viewers per minute during the 2014 Super Bowl, making it the most-viewed single sports event ever delivered online.\r\n\r\nAnvato was acquired by Google in 2016.\r\n\r\n[main-url]: 
http://anvato.com"},{"id":"fat-cat-chat","titleShort":"IRC-like\nchat","titleLong":"Fat Cat Chat","urls":{"demo":"https://levi.dev/fat-cat-chat","github":"https://github.com/levilindsey/fat-cat-chat"},"jobTitle":"","location":"","date":"2/2014","categories":["side-project","web","app","art","web-sockets","socket.io","node.js","express","back-end","frontend","solo-work"],"images":[{"fileName":"screenshot5.png","description":"All of the users currently in a chat room are shown in the right-side panel. Your user name is shown with an asterisk. Hyperlinks can be included within chat messages either by entering a valid URL—including the protocol—or by using the special link command."},{"fileName":"screenshot1.png","description":"Private chats are shown in a small, collapsible panel at the bottom of the screen. The other sections of the page are included within accordion-style, collapsible panels."},{"fileName":"screenshot4.png","description":"Each client is kept up-to-date with a manifest of all current rooms and users. Clicking on either a room name or a user name, in any context, will open the corresponding public or private panel, respectively."},{"fileName":"screenshot6.png","description":"There are three buttons at the top of the screen that let you change your chat name, create new rooms, and add new bots to chat with. 
The bots will send you emoticons, links to funny cat GIFs, and random cat facts."},{"fileName":"screenshot2.png","description":"This app includes a collection of cat-themed emoticons!"},{"fileName":"screenshot3.png","description":"This app includes many standard IRC commands."}],"videos":[],"content":"#### An IRC-like chat server and web client application\r\n\r\nLevi built this app in order to hone his server-side skills and to learn about web sockets.\r\n\r\n## Features\r\n\r\nSome features of this chat application include:\r\n\r\n- Private and room chat areas\r\n- Bots to chat with in case you're feeling lonely\r\n- Syntax highlighting and link injection for user names, room names, and command names\r\n- The ability to include links in chat messages\r\n- Custom cat-themed emoticons\r\n- Numerous fun facts and [GIFs][cat-gif-url] about cats\r\n- Many commands including:\r\n - `/help`\r\n - `/rooms`\r\n - `/join`\r\n - `/msg`\r\n - `/nick`\r\n - `/ping`\r\n - `/ignore`\r\n - `/leave`\r\n - `/quit`\r\n - `/link`\r\n\r\n## Acknowledgements / Technology Stack\r\n\r\nThe technology stack for this project includes:\r\n\r\n- [Node.js][node-url]\r\n- [Socket.IO][socket-io-url]\r\n- HTML5/CSS3/JavaScript\r\n\r\n\r\n[main-url]: https://levi.dev/fat-cat-chat\r\n[cat-gif-url]: http://25.media.tumblr.com/0798843644c862737ce1258821b5938a/tumblr_mnba38vUWI1qzcv7no1_400.gif\r\n[node-url]: http://nodejs.org/\r\n[socket-io-url]: http://socket.io/"},{"id":"photo-viewer","titleShort":"Photo viewer","titleLong":"Photo viewer","urls":{"demo":"https://levi.dev/photo-viewer","github":"https://github.com/levilindsey/photo-viewer"},"jobTitle":"","location":"","date":"2/2014","categories":["side-project","web","frontend","app","xhr2","animation","solo-work"],"images":[{"fileName":"screenshot3.png","description":"The images for each of the different categories are contained within their own panels. 
These collapse and expand in an accordion style (only one panel is expanded at a time)."},{"fileName":"screenshot5.png","description":"Clicking on an image thumbnail causes a lightbox to be shown containing a larger version of that thumbnail image."},{"fileName":"screenshot8.png","description":"The image lightbox includes overlay controls for exiting, entering full-screen mode, and navigating to the previous and next images."},{"fileName":"screenshot6.png","description":"The image within a lightbox can be expanded for full-screen viewing. As the higher-resolution image loads, a progress circle is shown with the current image download progress and a phantom background of the mid-sized version of the image."},{"fileName":"screenshot7.png","description":"An image in full-screen mode."},{"fileName":"screenshot1.png","description":"As an individual image or the collection metadata loads, a custom progress circle is shown."},{"fileName":"screenshot2.png","description":"After the collection metadata has loaded, the different image categories become available for selection."},{"fileName":"screenshot4.png","description":"The images for each of the different categories are contained within their own panels. These collapse and expand in an accordion style (only one panel is expanded at a time)."},{"fileName":"screenshot9.png","description":"Levi created a stand-alone version of the collapsible grid that was used for displaying the image thumbnails. This grid includes a spring dynamic that creates an interesting animation as the grid collapses or expands."}],"videos":[],"content":"#### A general-purpose photo-viewer application\r\n\r\nThis fancy web app is loaded with bells and whistles. 
A smattering of them includes:\r\n\r\n- Expandable, animated photo grids, which display large collections of photo thumbnails\r\n- A lightbox for conveniently viewing and navigating through medium-sized versions of the images\r\n- Full-screen mode for viewing and navigating through the original, full-sized versions of the images\r\n- A very flashy SVG-based progress circle\r\n- Use of the XHR2 progress event for displaying the download progress of larger images\r\n\r\nThis app was originally built to show off the photos from Levi and [Jackie's][jackie-url] wedding.\r\n\r\n\r\n[main-url]: https://levi.dev/wedding/photos\r\n[jackie-url]: http://jackieandlevi.com/jackie"},{"id":"progress-circle","titleShort":"Progress\ncircle:\nWeb doodle","titleLong":"Progress circle: A web doodle","urls":{"demo":"https://levi.dev/progress-circle","github":"https://github.com/levilindsey/progress-circle","codepen":"http://codepen.io/levisl176/pen/ndklu"},"jobTitle":"","location":"","date":"2/2014","categories":["side-project","web","doodle","art","frontend","svg","tiny","animation","2D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"The progress circle with its larger radius. The dots revolve in a clockwise direction while the colors of the dots transition in such a way to make the colors appear to transition in a counter-clockwise direction."},{"fileName":"screenshot2.png","description":"The progress circle with its smaller radius. 
The lightness increases as the radius decreases."}],"videos":[],"content":"#### A progress circle built using SVG\r\n\r\nThe progress circle consists of a ring of color-shifting dots.\r\n\r\nSpecifically, the dots revolve in a clockwise direction while the colors of the dots transition in such a way to make the colors appear to transition in a counter-clockwise direction.\r\n\r\nThis project uses a separate custom animation package Levi developed.\r\n\r\n\r\n[main-url]: https://levi.dev/progress-circle\r\n[codepen-url]: http://codepen.io/levisl176/pen/ndklu"},{"id":"metabounce","titleShort":"Metabounce:\nWeb doodle","titleLong":"Metabounce: A web doodle","urls":{"demo":"https://levi.dev/metabounce","github":"https://github.com/levilindsey/metabounce","codepen":"http://codepen.io/levisl176/pen/bkmpE"},"jobTitle":"","location":"","date":"1/2014","categories":["side-project","web","doodle","art","frontend","svg","animation","2D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Both the inner and main balls grow and eventually pop with delightful animations. 
A popping ball will push—and possibly pop—neighboring balls."},{"fileName":"screenshot2.png","description":"The balls can be rendered with transparency and multiple gradients that make them resemble iridescent bubbles."},{"fileName":"screenshot3.png","description":"This app includes many parameters that adjust things like: bounce squish intensity, gravity, drag, pop force thresholds, ball growth rates, pop neighbor displacement power, nested balls, ball colors, ball count, ball size, etc."},{"fileName":"screenshot4.png","description":"This app includes many parameters that adjust things like: bounce squish intensity, gravity, drag, pop force thresholds, ball growth rates, pop neighbor displacement power, nested balls, ball colors, ball count, ball size, etc."}],"videos":[],"content":"#### Fun with SVG, balls, and bouncing!\r\n\r\nThis is a project in which Levi played around with balls and bouncing. This is just for fun, so enjoy!\r\n\r\nThe color of each ball is always shifting to another random value. At any given point, the hue, saturation, and lightness components of the ball colors are each constrained to a random global range. This creates an interesting effect with independently fluctuating colors that almost always seem cohesive.\r\n\r\n\r\n[main-url]: https://levi.dev/metabounce\r\n[codepen-url]: http://codepen.io/levisl176/pen/lqmAE"},{"id":"dancing-spokes","titleShort":"Dancing\nspokes:\nWeb doodle","titleLong":"Dancing spokes: A web doodle","urls":{"demo":"https://levi.dev/dancing-spokes","github":"https://github.com/levilindsey/dancing-spokes","codepen":"http://codepen.io/levisl176/pen/Cktif"},"jobTitle":"","location":"","date":"1/2014","categories":["side-project","web","frontend","art","svg","doodle","tiny","animation","2D","solo-work"],"images":[{"fileName":"screenshot1.png","description":"The color of each spoke is always shifting to another random value. 
At any given point, the hue, saturation, and lightness components of the spoke colors are each constrained to a random global range. This creates an interesting effect with independently fluctuating colors that almost always seem cohesive."},{"fileName":"screenshot2.png","description":"The color of each spoke is always shifting to another random value. At any given point, the hue, saturation, and lightness components of the spoke colors are each constrained to a random global range. This creates an interesting effect with independently fluctuating colors that almost always seem cohesive."}],"videos":[],"content":"#### Fun with SVG animations and dancing sticks of light!\r\n\r\nThe color of each spoke is always shifting to another random value. At any given point, the hue, saturation, and lightness components of the spoke colors are each constrained to a random global range. This creates an interesting effect with independently fluctuating colors that almost always seem cohesive.\r\n\r\n\r\n[main-url]: https://levi.dev/dancing-spokes"},{"id":"chess","titleShort":"Chess","titleLong":"Chess","urls":{"demo":"https://levi.dev/chess","github":"https://github.com/levilindsey/chess"},"jobTitle":"","location":"","date":"10/2013","categories":["side-project","web","app","frontend","game","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Gameplay screenshot: Directions for the game are shown below the board."},{"fileName":"screenshot2.png","description":"Gameplay screenshot: After clicking on a piece, you must move that piece. Hovering over valid tiles to move the piece to will cause the tile to turn green. Invalid tiles will turn red. Tiles will turn yellow if the move would take an enemy piece. If there are no valid moves for a piece, it is not selectable in the first place."},{"fileName":"screenshot3.png","description":"Gameplay screenshot: After clicking on a piece, you must move that piece. 
Hovering over valid tiles to move the piece to will cause the tile to turn green. Invalid tiles will turn red. Tiles will turn yellow if the move would take an enemy piece. If there are no valid moves for a piece, it is not selectable in the first place."},{"fileName":"screenshot4.png","description":"Gameplay screenshot: After clicking on a piece, you must move that piece. Hovering over valid tiles to move the piece to will cause the tile to turn green. Invalid tiles will turn red. Tiles will turn yellow if the move would take an enemy piece. If there are no valid moves for a piece, it is not selectable in the first place."},{"fileName":"screenshot5.png","description":"Gameplay screenshot: The current gameplay status is shown above the board."},{"fileName":"screenshot7.png","description":"Gameplay screenshot: The current gameplay status is shown above the board."},{"fileName":"screenshot10.png","description":"Gameplay screenshot: Castling is supported (before castling)."},{"fileName":"screenshot11.png","description":"Gameplay screenshot: Castling is supported (after castling)."},{"fileName":"screenshot9.png","description":"Gameplay screenshot: Even the special rule for en passant captures is supported."},{"fileName":"screenshot6.png","description":"Gameplay screenshot: Pawns can be promoted when reaching the far edge of the board."},{"fileName":"screenshot8.png","description":"Gameplay screenshot: Checkmate conditions are also calculated."}],"videos":[],"content":"#### A simple, two-player game of chess\r\n\r\nLevi developed this app in a day. It includes a button to generate a random valid play for the current player.\r\n\r\nIt does not currently support gameplay across remote machines, but Levi plans to implement this in the future. 
It currently does not support an AI player, but Levi also plans on implementing a simple form of this in the future.\r\n\r\n[main-url]: https://levi.dev/chess"},{"id":"squared-away","titleShort":"Tile-matching\npuzzle\ngame","titleLong":"Squared Away: A tile-matching puzzle game","urls":{"demo":"https://levi.dev/squared-away","github":"https://github.com/levilindsey/squared-away"},"jobTitle":"","location":"","date":"9/2013","categories":["side-project","web","app","art","frontend","canvas","animation","2D","game","solo-work"],"images":[{"fileName":"screenshot1.png","description":"Squared Away features a collection of eight different levels—each with progressively more challenging parameters—that guide the player in their exploration of the different gameplay features."},{"fileName":"screenshot11.png","description":"Blocks fall from all four sides. Blocks stack and collapse according to rules that closely resemble the familiar game of Tetris. Upcoming blocks are shown for each side with a cool-down progress indicator."},{"fileName":"screenshot10.png","description":"Completed layers show a cool collapsing block sprite-based animation."},{"fileName":"screenshot9.png","description":"Completed layers show a cool collapsing block sprite-based animation."},{"fileName":"screenshot7.png","description":"The game features many configurable parameters."},{"fileName":"screenshot6.png","description":"The main intended method of interaction is with a mouse or touch gestures. Falling blocks can be slid downward, slid from side to side, rotated, and moved to the next quadrant. 
As a falling block is being manipulated, phantom lines are shown, which help to indicate where a block can be moved in either the downward or lateral directions."},{"fileName":"screenshot5.png","description":"The game can also be played with keyboard input."},{"fileName":"screenshot8.png","description":"The game features background music from the talented Eric Skiff."}],"videos":[],"content":"#### A tile-matching puzzle game\r\n\r\nThis web app gave Levi the opportunity to hone his web development skills and to learn the latest features of HTML5 and CSS3.\r\n\r\nOn the front end, Levi used pure JavaScript without external libraries like jQuery—with the notable exception of SoundJS for cross-browser support for layering audio. On the server side, Levi used Node.js with ExpressJS.\r\n\r\n## Gameplay\r\n\r\nCore gameplay features:\r\n\r\n- Blocks fall from all four sides\r\n- Blocks stack and collapse according to rules that closely resemble the familiar game of Tetris\r\n- Upcoming blocks are shown for each side with a cooldown progress indicator\r\n- Falling blocks can be manipulated with either the mouse or the keyboard if keyboard mode is enabled\r\n- Falling blocks can be slid downward, slid from side to side, rotated, and moved to the next quadrant\r\n- As a falling block is being manipulated, phantom lines are shown, which help to indicate where a block can be moved in either the downward or lateral directions.\r\n- As more layers of blocks are collapsed, the player advances through levels and gameplay becomes more difficult with faster falling blocks and shorter cooldown times.\r\n- Awesome sound effects and background music.\r\n\r\nAdditional optional gameplay features include:\r\n\r\n- A mode where only complete layers around the entire center square are collapsed.\r\n- A mode where blocks fall from the center outward.\r\n- A special block type that \"settles\" all of the blocks that have landed.\r\n- A special block type that destroys any nearby block that 
has landed.\r\n\r\n\r\n[main-url]: https://levi.dev/squared-away"},{"id":"newtons-tablet","titleShort":"Intelligent\ntutoring\nsystem:\nTablet","titleLong":"Newton's Tablet: An intelligent tutoring system","urls":{"published":"http://escholarship.org/uc/item/40r3k5v2"},"jobTitle":"Graduate Student Researcher","location":"","date":{"start":"9/2011","end":"7/2013","tieBreaker":3},"categories":["work","school","research","ucr","c-sharp","java","app","tablet","teamwork"],"images":[{"fileName":"boundary-trace-stage-screenshot.png","description":"A screenshot of the program during the boundary-trace stage. At this point, the student is identifying and isolating the individual bodies from the complex overall system."},{"fileName":"force-drawing-stage-screenshot.png","description":"A screenshot of the program during the force-drawing stage."},{"fileName":"error-mode-screenshot.png","description":"A screenshot of the program showing error feedback. During each stage in the problem-solving process, the program evaluates the student's work and provides guided feedback for any errors."},{"fileName":"exp-eqn-entry-stage-screenshot.png","description":"A screenshot of the program during the final, equation-entry stage."},{"fileName":"matching-points-from-resampling.png","description":"The Importance of Resampling for Matching Points. In these two examples, the blue points represent the hand-drawn stroke, and the red points represent the boundary of the underlying body. Lines are drawn from a point in one set to the closest point in the other set if the closest point is within the distance threshold. The points with a yellow center represent points that do not have any close matches from the other set. (a) Many of the points from a densely sampled trace stroke will not match any of the points from a sparsely sampled boundary polygon. 
(b) Resampling the trace and the boundary polygon to the same number of points—40 in this case—greatly increases the likelihood of points having matches."}],"videos":[{"videoHost":"youtube","id":"LAlzil4WsGw","description":"A narrated demonstrational video explaining how to use the Newton's Tablet program."}],"content":"#### An Intelligent Tutoring System running on Windows tablets\r\n\r\nIn his graduate studies at the University of California, Riverside, Levi worked with Professor [Thomas Stahovich][stahovich-url] in the [Smart Tools lab][stl-url].\r\n\r\nLevi led the development of a statics tutorial system, which both helped novice students tackle difficult concepts and provided the lab with key insights into the learning process and how students think.\r\n\r\nThis work resulted in the production of two completely different software programs: one which ran on tablet PC computers with a standard computer interface (_written in C# with WPF_), and one which ran on [Livescribe smartpens][livescribe-url] with specially designed paper worksheets (_written in Java_). Both programs were designed with a natural user interface as a paramount concern. 
These programs were deployed to 150 students in an introductory statics course at the University of California, Riverside in the Winter of 2012.\r\n\r\nYou can read Levi's thesis at [escholarship.org/uc/item/40r3k5v2][thesis-url].\r\n\r\n\r\n[stahovich-url]: http://www.engr.ucr.edu/faculty/me/stahovich.html\r\n[stl-url]: http://smarttools.engr.ucr.edu/\r\n[livescribe-url]: http://livescribe.com/en-us/\r\n[thesis-url]: http://escholarship.org/uc/item/40r3k5v2"},{"id":"newtons-pen","titleShort":"Intelligent\ntutoring\nsystem:\nPen","titleLong":"Newton's Pen: An intelligent tutoring system","urls":{"published":"http://escholarship.org/uc/item/40r3k5v2"},"jobTitle":"Graduate Student Researcher","location":"","date":{"start":"9/2011","end":"7/2013","tieBreaker":2},"categories":["work","school","research","ucr","c-sharp","java","app","livescribe","pen","teamwork"],"images":[{"fileName":"pen-boundary-stage-work-1.png","description":"A free-body-diagram worksheet showing a student's completed work."},{"fileName":"pen-fbd-worksheet.png","description":"A free-body-diagram worksheet showing a student's completed work."},{"fileName":"pen-prob-desc-worksheet.png","description":"A problem-description worksheet showing problem-specific information in the top region and general buttons for using the system in the bottom region."},{"fileName":"pen-and-paper.png","description":"A Livescribe smartpen with Anoto dot paper."}],"videos":[{"videoHost":"youtube","id":"y8QYhgRrRZk","description":"A narrated demonstrational video explaining how to use the Newton's Pen application."}],"content":"#### An Intelligent Tutoring System running on Livescribe smartpens\r\n\r\nIn his graduate studies at the University of California, Riverside, Levi worked with Professor [Thomas Stahovich][stahovich-url] in the [Smart Tools lab][stl-url].\r\n\r\nLevi led the development of a statics tutorial system, which both helped novice students tackle difficult concepts and provided the lab with key insights into the 
learning process and how students think.\r\n\r\nThis work resulted in the production of two completely different software programs: one which ran on tablet PC computers with a standard computer interface (_written in C# with WPF_), and one which ran on [Livescribe smartpens][livescribe-url] with specially designed paper worksheets (_written in Java_). Both programs were designed with natural user interface as a paramount concern. These programs were deployed to 150 students in an introductory statics course at the University of California, Riverside in the Winter of 2012.\r\n\r\nYou can read Levi's thesis at [escholarship.org/uc/item/40r3k5v2][thesis-url].\r\n\r\n\r\n[stahovich-url]: http://www.engr.ucr.edu/faculty/me/stahovich.html\r\n[stl-url]: http://smarttools.engr.ucr.edu/\r\n[livescribe-url]: http://livescribe.com/en-us/\r\n[thesis-url]: http://escholarship.org/uc/item/40r3k5v2"},{"id":"gesture-recognizer","titleShort":"Gesture\nrecognition","titleLong":"Gesture segmenter and recognizer","urls":{"github":"https://github.com/levilindsey/stroke-recognition"},"jobTitle":"","location":"","date":{"start":"9/2011","end":"7/2013","tieBreaker":1},"categories":["school","research","c-sharp","app","pen","solo-work"],"images":[{"fileName":"canvas-ink-sigma-instance.png","description":"The program with a sigma shape drawn by the user. The system parameters can be seen on the right. The recognizer in this case has been trained on the sample student data with a hold out of first and sixth instances of each shape from each user. The system recognizes the current canvas ink as a sigma shape."},{"fileName":"directional-bitmaps-sigma-instance.png","description":"The directional pixel values for the ink shown in the previous image."},{"fileName":"directional-bitmaps-sigma-template.png","description":"The directional bitmaps for the sigma shape class template. 
More intense coloration represents higher probabilities."},{"fileName":"directional-bitmaps-1-2-instance-and-template.png","description":"The directional pixel values for a holdout shape overlaid on top of the directional bitmaps for the shape class template to which it was matched."},{"fileName":"recognition-stats-1-2-instance.png","description":"The recognition details for the shape instance shown in the previous image."},{"fileName":"average-results.png","description":"A confusion matrix showing the average results from an 18-fold cross-validation with single-user holdouts."}],"videos":[{"videoHost":"youtube","id":"xxBeSKijSSw","description":"A walkthrough of Levi's stroke segmentation functionality."}],"content":"_Levi developed a novel algorithm for real-time gesture recognition from ink data. This was extended from work done in a UCR course on Pen-Based Computing algorithms and techniques._\r\n\r\n## An Inductive Image-Based Recognizer Using Directional Bitmap Templates\r\n\r\n### Contents\r\n\r\n- The Algorithm\r\n- Strengths\r\n- Weaknesses\r\n- Improvements\r\n- Performance\r\n- Additional Features\r\n\r\n### The Algorithm\r\n\r\nThe general idea of the algorithm is to first create four template bitmaps to represent the ink directional probability of a given shape class; an unknown shape instance is then classified as the class whose directional templates it most closely matches.\r\n\r\n#### Preprocessing\r\n\r\nFirst, angle values are computed for all points in all strokes in a given shape instance. These angles are measured from zero degrees along the horizontal axis. The angle value for a point is calculated as the average of the angles of the line segments connecting that point to its previous and next neighbors. These angles are then convolved with a Gaussian smoothing kernel.\r\n\r\nNext, the given shape instance is uniformly scaled and translated so that its x and y coordinate values range from 0 to 1. 
It is also translated so that it is centered in this hypothetical, square, 1x1 canvas.\r\n\r\nDirectional pixel values are then computed for the given shape instance. There are four directional components associated with each pixel—the lines along 0°, 45°, 90°, and 135°. The value of a point for each of these four directions is 1 if the angle differs by 0°, 0 if the angle differs by 45° or more, and linearly interpolated between 1 and 0 for differences between 0° and 45°. Note that these directions also match with their opposites; i.e., if a point has an angle of 225°, then it has a difference of 0° with the 45° line, and its value for its 45° directional pixel is 1.\r\n\r\nThe entire bitmap region is not stored for each shape instance; in order to save space and time, a mapping from pixel indices to directional intensity values is created, and this mapping contains a key for a given pixel if and only if the shape instance contains a point in that pixel. There are actually four such mappings for each shape instance—one for each of the four directions. These mappings are created by looping over each of the points, determining in which pixel a point lies, and storing the four directional values of this point at this pixel index within the four mappings. If multiple points in a shape instance have values for the same direction in the same pixel, then the largest value is saved.\r\n\r\nThe discretization of ink means that we have two special cases to consider: when a single pixel contains multiple consecutive points, and when the line segment between two consecutive points intersects a pixel in which neither point actually lies. The former case is actually handled well with the aforementioned policy of saving a pixel's maximal intensity value for each direction. 
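The per-point directional values described above can be sketched as follows. This is a simplified JavaScript illustration (the original implementation was written in C#); the function name and degree-based representation are illustrative assumptions, not the original code:

```javascript
// The four directional components associated with each pixel: the lines
// along 0°, 45°, 90°, and 135°.
const DIRECTIONS = [0, 45, 90, 135];

// Computes the four directional intensity values for a point, given the
// point's smoothed angle in degrees. Each value is 1 for a 0° difference
// from the direction, 0 for a 45° difference or more, and linearly
// interpolated in between.
function directionalValues(pointAngle) {
  // Line directions are equivalent mod 180°; e.g., 225° matches the 45° line.
  const normalized = ((pointAngle % 180) + 180) % 180;
  return DIRECTIONS.map((direction) => {
    let difference = Math.abs(normalized - direction);
    // The angular distance between two line directions is at most 90°.
    difference = Math.min(difference, 180 - difference);
    return Math.max(0, 1 - difference / 45);
  });
}
```

For example, a point with an angle of 225° yields full intensity for the 45° channel and zero intensity for the other three channels.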
An alternative approach for this could have been to use the average angle values for consecutive points lying within the same pixel, but this causes a good deal of information to be lost within pixels containing high curvature—i.e., corners. The latter case could be handled robustly by calculating the intermediate pixels via the Bresenham line algorithm, but the lost pixels become less significant with more training examples. Also, a 3x3 Gaussian smoothing kernel is used to smooth the final values of the templates. However, these two points do not address the lost pixels from an unknown shape instance being recognized, and further research could be performed to determine whether the application of the Bresenham line algorithm would increase recognition accuracy.\r\n\r\n#### Training\r\n\r\nAfter each shape instance has been preprocessed, actually creating the templates is a simple process. For each shape class, four complete bitmaps are created—one for each direction—and then all of the pixel intensity values for each of the training shape instances are added together into the appropriate bitmaps. Each pixel in each bitmap is then normalized by the number of training instances for the given shape class template. Finally, a 3x3 Gaussian smoothing kernel is used to smooth the final values of each of the directional bitmaps for each template.\r\n\r\n#### Recognition\r\n\r\nTo recognize a given unknown shape instance, a simple distance metric is used, and the shape is classified as whichever class yields the smallest distance. 
This distance between a shape instance and a class template is computed as\r\n\r\n![Shape-class distance equation][shape-class-distance-equation-image]\r\n\r\nwhere _I_ is the list of the pixel indices—i.e., keys—in the pixel indices to directional intensity values mappings, _sθi_ is the directional intensity value from the θ directional mapping of the pixel at index _i_ for the unknown shape instance, _tθi_ is the directional intensity value from the θ directional bitmap of the pixel at index _i_ for the shape class template, _ns_ is the number of pixels containing ink for the unknown shape instance, _nt_ is the average number of pixels containing ink for the shape class template, and _w_ is a weight parameter.\r\n\r\nThe term relating to the number of pixels containing ink is important, because this distance metric only considers pixels covered by the unknown shape instance. To understand why this is a problem, consider an example in which the unknown shape instance is the letter P and there are templates for both the letter P and the letter R. Because the distance only considers the pixels from the shape instance P, the P and R templates will be found to have roughly the same distance. The term for the number of pixels containing ink allows the distance metric to match the P shape instance more closely to the P template than to the R template.\r\n\r\nIt may seem that, rather than using this term for the number of pixels containing ink, the distance metric could simply sum over all of the pixels in the template bitmap rather than only over the pixels covered by the shape instance, but this leads to its own problem. 
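Since the equation itself appears only as an image, the following Python sketch shows one plausible shape for such a metric; the squared-difference form and the exact ink-count penalty are assumptions, not the published formula:

```python
def template_distance(instance_maps, template_maps, n_s, n_t, w):
    # Hypothetical distance between an unknown instance and a class template.
    # instance_maps / template_maps: four dicts (one per direction) mapping
    # pixel index -> directional intensity. Only pixels covered by the
    # instance are compared, and a weighted term on the ink-pixel counts
    # (n_s vs. n_t) penalizes templates whose overall ink coverage differs.
    total = 0.0
    for s_map, t_map in zip(instance_maps, template_maps):
        for i, s in s_map.items():
            total += (s - t_map.get(i, 0.0)) ** 2
    return total + w * abs(n_s - n_t)
```

The unknown shape would then be classified as the class whose template yields the smallest such distance.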
This would mean that whichever class template contains the least ink—in our case '-'—would nearly always be found to have the lowest distance.\r\n\r\n#### Parameters\r\n\r\n- _w_ = 0.09\r\n- template side length (templates are square) = 14\r\n- number of smoothing iterations for the templates = 3\r\n- number of smoothing iterations for the point angle values = 1\r\n\r\nThese values were selected by hand.\r\n\r\n### Performance\r\n\r\nIn order to test this algorithm, a shape collection was compiled from 18 people each drawing 15 shapes 5 times, with a few instances lost due to collection error.\r\n\r\nThen an 18-fold cross-validation was performed with single-user holdouts. The averages from this test are presented in this confusion matrix.\r\n\r\n![Average results][average-results-image]\r\n\r\n_**NOTE: this screenshot should instead say \"18-fold\"**_\r\n\r\n### Strengths\r\n\r\nThe largest strength of this recognition algorithm is that it is extremely fast both to train and to recognize. It took, on average, 0.36 seconds to perform the entire 18-fold cross-validation, 0.02 seconds to train, and 0.00006 seconds to recognize a shape instance.\r\n\r\nThis algorithm is also scale-invariant.\r\n\r\n### Weaknesses\r\n\r\nThis algorithm is rotationally variant, so it would not perform well in a system in which rotation mattered.\r\n\r\nThis algorithm also does not fully take into account the conditional probabilities of the ink. The templates naturally represent a form of Gaussian probability for ink around a segment of the shape class—i.e., the ink in an instance of the shape class is more likely to lie in the center of the corresponding template segment than off to either side of it. However, given that a point in an instance does lie off to one side of a segment of the template, it is much more likely that the next point will also lie off to that side, and much less likely that it will lie off to the other side. 
This algorithm does not take advantage of this conditional probability.\r\n\r\n### Improvements\r\n\r\nThis algorithm could be extended to become rotationally invariant. This could possibly be done by rotating each shape instance according to an indicative angle from the centroid to the point farthest from the centroid.\r\n\r\nThe conditional ink probability—addressed in the weaknesses section—could be taken advantage of with a \"super-pixel\" scheme. In this scheme, each pixel in each directional bitmap could contain four additional m×m sub-bitmaps of pixel values. These sub-bitmaps would represent the conditional directional ink probabilities of the neighbors of the given center pixel. Adapting the template training and the distance metric for these bitmaps of super pixels would be a fairly straightforward extension of their current versions. However, this super-pixel scheme would have a much higher time and space complexity.\r\n\r\n### Additional Features\r\n\r\n![Directional bitmaps for the sigma-class template][directional-bitmaps-sigma-template-image]\r\n\r\nThis shows the directional bitmaps for the sigma shape class template. More intense coloration represents higher probabilities.\r\n\r\n![The original ink on the canvas for an instance of the sigma shape][canvas-ink-sigma-instance-image]\r\n\r\nThis shows the program with a sigma shape drawn by the user. The system parameters can be seen on the right. The recognizer in this case has been trained on the sample student data with a holdout of the first and sixth instances of each shape from each user. 
The system recognizes the current canvas ink as a sigma shape.\r\n\r\n![Directional bitmaps showing an instance of the sigma shape][directional-bitmaps-sigma-instance-image]\r\n\r\nThis shows the directional pixel values for the ink shown in the previous image.\r\n\r\n![Directional bitmaps showing data for both an instance and the template of the 1/2 shape][directional-bitmaps-1-2-instance-and-template-image]\r\n\r\nThis shows the directional pixel values for a holdout shape overlaid on top of the directional bitmaps for the shape class template to which it was matched.\r\n\r\n![Recognition statistics for an instance of the 1/2 shape][recognition-stats-1-2-instance-image]\r\n\r\nThis shows the recognition details for the shape instance shown in the previous image.\r\n\r\n\r\n[shape-class-distance-equation-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/shape-class-distance-equation.png\r\n[average-results-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/average-results.png\r\n[directional-bitmaps-sigma-template-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/directional-bitmaps-sigma-template.png\r\n[canvas-ink-sigma-instance-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/canvas-ink-sigma-instance.png\r\n[directional-bitmaps-sigma-instance-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/directional-bitmaps-sigma-instance.png\r\n[directional-bitmaps-1-2-instance-and-template-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/directional-bitmaps-1-2-instance-and-template.png\r\n[recognition-stats-1-2-instance-image]: https://s3-us-west-2.amazonaws.com/levi-portfolio-media/gesture-recognizer/recognition-stats-1-2-instance.png"},{"id":"ucr-ta","titleShort":"Teaching\nassistant","titleLong":"Teaching assistant","urls":{},"jobTitle":"Teaching 
Assistant","location":"Riverside, CA","date":{"start":"9/2012","end":"6/2013"},"categories":["work","school","teaching","c++","ucr","teamwork"],"images":[],"videos":[],"content":"Levi was a teacher's assistant for Software Construction for two quarters and for Intro to Programming for two quarters.\r\n\r\nBoth of these courses were taught using C++.\r\n\r\n## Un-edited end-of-quarter reviews\r\n\r\n> He is the best TA i have ever seen.\r\n\r\n\r\n> Levi is always available and helpful to his students. He gives good feedback, which helps the overall learning process. He is kind and always motivates his students. Always in a good mood!\r\n\r\n\r\n> Great TA!. Really nice and knows how to talk to people. Explains the problem and lets you solve it for yourself instead of doing it for you. Made my first CS experience easy.\r\n\r\n\r\n> I don't know how you did it, but managing to run around for 3 hours straight answering all our questions was extremely commendable, and you kept a happy face about it all the time. I'd love to have you as a TA in the future! Thanks for everything\r\n\r\n\r\n> Levi was a really great TA. He is very helpful and approachable as well. He was indeed to go-to person for me whenever I needed help. I understood everything that he said clearly.\r\n\r\n\r\n> Very helpful and easy to approach. He truly cares if the students in his lab and others are learning or not. One of my best TA's so far.\r\n\r\n\r\n> Levi has been a great instructor, and is a very approachable person that as a student I do not hesitate to ask him questions.\r\n\r\n\r\n> You were so helpful and such a good teacher! And super sweet. 
Thank you!\r\n\r\n\r\n> BEST TA EVER :)"},{"id":"jackieandlevi.com","titleShort":"Portfolio\nwebsite:\nLevi's first","titleLong":"Levi's first portfolio website","urls":{"homepage":"http://jackieandlevi.com","github":"https://github.com/levilindsey/jackieandlevi.com"},"jobTitle":"","location":"","date":"2013","categories":["side-project","web","website","app","frontend","back-end","mean-stack","node.js","express","angular","mongodb","gulp.js","animation","solo-work"],"images":[{"fileName":"v1/screenshot1.png","description":"This is the original design for jackieandlevi.com. The home page had a Venn diagram containing links to pages about Jackie, Levi, and both of them."},{"fileName":"v1/screenshot2.png","description":"This was Jackie's about page."},{"fileName":"v1/screenshot4.png","description":"This was Levi's about page."}],"videos":[],"content":"#### The personal web site of Jackie and Levi Lindsey\r\n\r\nThe site hosted at this domain has gone through many iterations.\r\n\r\n- It was originally used as an [invite and RSVP system][wedding-url] for [Jackie][jackie-url] and [Levi's][levi-url] [wedding][wedding-photos-url].\r\n- Later, it was used as a [simple portfolio site][v2-url] for both Levi and Jackie.\r\n- Levi is currently developing a newer version of the site that will involve better designs and delightful interactions.\r\n\r\n\r\n[wedding-url]: http://jackieandlevi.com/wedding/invite\r\n[jackie-url]: http://jackieandlevi.com/jackie\r\n[levi-url]: https://levi.dev\r\n[wedding-photos-url]: http://jackieandlevi.com/wedding/photos\r\n[v2-url]: http://jackieandlevi.com\r\n[v3-url]: http://jackieandlevi.com"},{"id":"wedding-invite","titleShort":"Wedding\nsite","titleLong":"Wedding invite and RSVP 
system","urls":{"demo":"https://levi.dev/wedding/invite","github":"https://github.com/levilindsey/wedding-invite"},"jobTitle":"","location":"","date":"9/2012","categories":["side-project","web","website","frontend","art","php","jquery","animations","solo-work"],"images":[{"fileName":"screenshot1.png","description":"The landing view."},{"fileName":"screenshot2.png","description":"As the user moves the mouse inward, or touches the envelope, different navigational cards slide outward from under the envelope."},{"fileName":"screenshot4.png","description":"The RSVP card."},{"fileName":"screenshot5.png","description":"A card that tells the time of the wedding and shows a countdown."},{"fileName":"screenshot6.png","description":"A card that names the two people getting married."},{"fileName":"screenshot7.png","description":"A card that describes the location of the celebration."},{"fileName":"screenshot8.png","description":"A card that describes the specifics of the celebration."}],"videos":[],"content":"#### A wedding invite and RSVP system\r\n\r\nLevi developed this simple web app as the invite and RSVP system for his and [Jackie's][jackie-url] wedding.\r\n\r\n[main-url]: https://levi.dev/wedding/invite\r\n[jackie-url]: http://jackieandlevi.com/jackie"},{"id":"voicebox-technologies","titleShort":"Audio\nsignal\nprocessing","titleLong":"Audio signal processing","urls":{"homepage":"http://voicebox.com"},"jobTitle":"Summer Intern (Software Engineer)","location":"Bellevue, WA","date":{"start":"6/2011","end":"9/2011"},"categories":["work","intern","c","c++","teamwork"],"images":[{"fileName":"spectrogram.jpg","description":"A spectrogram with its corresponding waveform."}],"videos":[],"content":"_[VoiceBox Technologies][main-url] is a company focused on conversational speech recognition, search, and information management._\r\n\r\nLevi integrated noise-suppression functionality from an ETSI standard into VBT’s pre-existing [Voice Activity Detection][vad-url] software and 
tuned it for optimal performance.\r\n\r\n_C with C++ wrappers_\r\n\r\n[main-url]: http://voicebox.com/\r\n[vad-url]: http://en.wikipedia.org/wiki/Voice_activity_detection"},{"id":"phone-wand","titleShort":"Blind-\naccessible\nnavigation","titleLong":"Phone Wand: A blind-accessible navigation application","urls":{"googleCode":"https://code.google.com/p/mobileaccessibility/source/browse/#svn%2Ftrunk%2FPhoneWand"},"jobTitle":"","location":"","date":{"start":"4/2011","end":"6/2011"},"categories":["school","research","uw","android","java","accessibility","mobile","google-maps-api","app","teamwork"],"images":[{"fileName":"fixedinputscreen2.png","description":"This screen presents the current destination. This is essentially a home screen. From here, the user can enter a new destination, find a list of old destinations, find a list of other possible matches for the destination that was typed in the keyboard, or start navigating toward the current destination."},{"fileName":"fixedkeyboard1.png","description":"This screen presents a custom, blind-accessible keyboard. The user can drag a finger over the buttons to hear them quickly read aloud via TTS."},{"fileName":"fixedroute1.png","description":"While on this screen, the system provides vibrational feedback to guide the user through the route."},{"fileName":"fixeddirections2.png","description":"This screen presents a list of the steps in the current route. The user can drag a finger over the items to hear them read aloud via TTS."},{"fileName":"fixedroutearchive2.png","description":"This screen presents a list of previously entered destinations. 
The user can drag a finger over the items to hear them read aloud via TTS."},{"fileName":"screen-transition-diagram-small.png","description":"This diagram illustrates all of the different screens of the system and how to navigate from one to another."}],"videos":[{"videoHost":"youtube","id":"qooZe704Ppw","description":"A narrated demonstrational video explaining how to use the Phone Wand application."}],"content":"#### A blind-accessible route-navigation Android application\r\n\r\nIn a computer science capstone course on accessibility at the University of Washington, Levi co-developed a route-orienting application that would guide a blind user via vibrational feedback.\r\n\r\nThis application would first geocode the user's current location and a user-entered destination and then query the Google Directions API for a route from the one to the other. This route was then displayed on a map with the user’s current location. The user could then rotate the phone around, and the phone would vibrate when pointed in the direction of the current step in the user's route.\r\n\r\nThis involved the creation of a completely novel blind-accessible soft keyboard in addition to the implementation of a database for storing previously entered destinations.\r\n\r\nThe PhoneWand code is open source and online at [http://code.google.com/p/mobileaccessibility/source/browse/#svn%2Ftrunk%2FPhoneWand][main-url].\r\n\r\n_The following is the final paper from the Phone Wand project._\r\n\r\n## Abstract\r\n\r\nThe Phone Wand is an Android application for mobile phones that enables a blind user to more easily input and navigate a walking route. 
Once a route has been specified, the phone can use its built-in compass and vibration to hint at which direction to walk whenever the user requests assistance: (1) the user requests help by double tapping the \"magic button\" on an orientation screen, (2) the user moves the phone radially to search for the correct direction, and (3) the phone vibrates when facing the correct direction to continue on the route. This type of assistance is useful when the user is en route to the next intersection and wishes to verify the current heading along the route, or when the user has reached an intersection and wishes to receive the heading to the next intersection. The application also contains features to enable blind users to more easily input and manage walking routes. These include a custom blind-accessible keyboard implementation for more easily inputting addresses, a \"slide-rule\"-based blind-accessible list of previously entered walking routes, and the ability to save the current location as a new future destination. Such an application is useful for blind users navigating noisy, dense, urban city centers, where hearing is difficult, where the application's non-verbal orientation guidance is effective, and where location-sensing tools are most accurate.\r\n\r\n## Introduction\r\n\r\nNavigating routes can be difficult or time-consuming for people who are blind or have low vision. Navigation applications that are not blind-accessible make it especially difficult and time-consuming for blind users to navigate the software and enter a route destination. Another problem is that too much text-to-speech can be obstructive to a blind person's most important sense, hearing, taking away focus from actual navigation and listening for hazards. 
While blind-accessible applications often provide an easy interface for input, they often rely too much on text-to-speech output.\r\n\r\nThe purpose of the Phone Wand is to minimize text-to-speech related to actual route navigation, keeping only the text-to-speech necessary for a functioning blind-accessible application. The Phone Wand replaces verbal navigation feedback with vibrational feedback. This is a relatively new type of interaction that uses orientation as input and vibration as output. The concept is simple: (1) the user points the phone 360 degrees around her, and (2) the phone vibrates when the user is facing the correct direction. This requires minimal text-to-speech as output, solving our information-flooding problem. However, while the heart of our application uses compass and vibration feedback, our application still relies on text-to-speech for blind-accessible text entry, for displaying a traditional list of directions, and for giving application instructions.\r\n\r\nThe target group for our application includes blind users and low-vision users who can hear. The compass-and-vibration-feedback portion of our application is theoretically blind-deaf-accessible, but we assume that our target group can hear text-to-speech in order to hear application instructions and use the touch keyboard to set up routes in our application. Theoretically, we could make the application completely blind-deaf-accessible if we had an option to translate our input and output methods into some standard method familiar to blind-deaf users.\r\n\r\n## Use Case\r\n\r\nHarry is walking to a nearby store on the sidewalk and would like additional help. He pulls out his Android phone and launches the Phone Wand. After patiently inputting the store's destination and waiting for the phone to compute the route, he can press the \"magic button\" to enable the compass-driven vibrational orientation feedback mode. 
After pointing around for a bit, the phone vibrates in the direction he was heading, indicating that he is en route to the destination. Minutes later, he reaches an intersection and is unsure of his next direction of travel. Pressing the \"magic button\" again, he points the phone around. This time the phone vibrates when he points it to his left, indicating that he should turn left. Harry continues this until he reaches the store, where the phone announces that he has arrived at his destination.\r\n\r\n## Related Work\r\n\r\nA lot of previous work has been done on inventing useful methods of route-finding and navigation for blind users. Most of these previous projects are lacking in some regard, such as their accuracy, usability, or availability. There are many projects involving the creation of a novel navigation device, such as the EYECane project, which involved the creation of a white-cane device with an embedded computer and camera, but we will focus rather on blind-accessible smartphone route-finding/navigation applications.\r\n\r\nMost previous work with smartphone applications involves the use of GPS navigation systems with audio feedback, similar to the systems available for driving. The Sendero Group has created a couple of applications that are similar to our project. Their LookAround application has a user interface that is very similar to ours, but it only provides information on the user's current location; it does not provide route information. Their Mobile Geo application both finds routes and provides information about locations; however, it costs $788 and uses primarily audio feedback. The Iwalk application is another navigation application similar to ours, but once again, this application's primary method of user feedback was audio.\r\n\r\nOur application also relies on special accessible methods of text entry and item selection. 
Our slide-rule list-item selection is based upon an earlier project which allows the user to explore the items by pressing a finger on an item and to select it by double tapping it. Our blind-accessible touch keyboard was actually invented from scratch and emulates no prior method of text entry; a key is spoken when a finger presses onto it and is selected when the finger is lifted from it. However, a very similar technique for text entry can be seen in how Apple provides iPhone item exploration and selection with their VoiceOver functionality; with VoiceOver, an item is spoken when pressed or slid over and is selected with a second touch on the screen.\r\n\r\n## Solution\r\n\r\nThe purpose of the Phone Wand application is to provide a blind-accessible interface and orientation scheme for finding and navigating walking routes. Hence, the Phone Wand uses the Google Directions API to calculate walking routes, using the Android location service, including network-assisted GPS. Orientation output from the Android sensor service and vibration features are used for orientation feedback. The Android TextToSpeech (TTS) library is heavily used to communicate application screen instructions to users in a blind-accessible manner. The Phone Wand automatically saves previously entered routes and provides the ability to save the current location. The slide-rule selection method is used to display these routes for the users and to display a current list of the walking-route directions.\r\n\r\n### Destination Input\r\n\r\nThe Phone Wand features a custom blind-accessible keyboard for entering destinations. This keyboard was created to emulate the iPhone keyboard familiar to many blind users. The user uses the keyboard by holding down on the screen and releasing when the desired letter is called out. There is functionality for reading out the entered text, backspacing, reading out the cursor location, and more. 
The done button searches for a route best matching the entered text.\r\n\r\nSince blind-accessible text entry is still relatively time-consuming for users, the Phone Wand automatically saves previously entered destinations in a list. A user can access this blind-accessible list and select a destination address. The list is based on a Slide Rule interaction technique (cite slide rule paper). A user can scan the list with her finger while the phone speaks back what is currently underneath her finger. If the list is longer than the screen, the user can scan for the \"next\" and \"previous\" buttons in the list to navigate it. When the user finds the desired item, she can double tap to confirm it, after which the phone will search for a route.\r\n\r\n### Navigation Features\r\n\r\nAfter a destination has been entered via the keyboard or selected from the list of saved addresses, the phone takes the user to a map screen, which displays a route from the current location to the destination. On this screen, there are several options: (1) recompute a new route from the current location, (2) find a nearby address and save it, (3) access a list of directions, or (4) use the compass and vibration feature.\r\n\r\n(1) Recomputing a route is useful when the user is significantly off course from the original route, or whenever the user wants to find a fresh route from the current location. The user can accomplish this by swiping upwards. The phone downloads a new route using the Google Directions API.\r\n\r\n(2) Finding a nearby address and saving it is useful when the user wants to \"bookmark\" the current location for future use. This is accomplished by swiping downwards.\r\n\r\n(3) The user can also access a list of directions by swiping right on the map screen. The list is blind-accessible: it uses the slide-rule interaction. Therefore, a user can scan through the list and have the phone speak back the directions. 
This functionality was implemented to give the user another option.\r\n\r\n(4) Finally, the user can take advantage of the compass and vibration feature on this screen, as detailed in the next section.\r\n\r\n### Compass and Vibration\r\n\r\nWhile the map screen provides additional features, the compass and vibration mode is the most prominent feature of our application.\r\n\r\nTo activate the compass and vibration mode, the user double taps on the map screen. (This gesture is referred to as the \"magic button\" because it turns this mode on and off.) Afterwards, the user can move the phone radially around her. The phone vibrates when the user is facing the correct direction. The user should double tap on the map screen again to deactivate the mode and continue walking in the direction determined by the phone.\r\n\r\nThe intended use of this compass and vibration mode is to check the user's heading at points of interest (like intersections) along the route, or whenever the user simply needs to verify her heading. Therefore, the user should activate the mode, check her heading, and then deactivate it immediately. While one can activate the mode and leave it enabled throughout the entire route, it may not be accurate enough to direct the user along the route.\r\n\r\nFor effective feedback from the compass, the user should hold the phone parallel to the ground, move the phone radially (point the phone 360 degrees around the person), and move the phone slowly. 
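The heading check at the core of this mode can be sketched as follows (a hypothetical Python illustration, not the PhoneWand source; the 15° tolerance is an assumed value):

```python
def should_vibrate(device_heading_deg, route_bearing_deg, tolerance_deg=15.0):
    # Vibrate when the phone points within tolerance_deg of the route bearing.
    # Headings are compass degrees; the comparison uses the shortest angular
    # distance, so 350° and 5° are treated as only 15° apart.
    diff = abs(device_heading_deg - route_bearing_deg) % 360.0
    return min(diff, 360.0 - diff) <= tolerance_deg
```

In the real application, the device heading would come from the Android sensor service and the bearing from the current step of the route.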
Since the compass is not entirely accurate, we advise using this feature to \"check\" the heading occasionally along the route, not to depend on it entirely.\r\n\r\n## Future Work\r\n\r\nWe foresee a few extensions and modifications of the Phone Wand that would be valuable blind- and low-vision-accessible applications.\r\n\r\n### Indoor Navigation using RFID Tags\r\n\r\nWillis and Helal's paper \"RFID information grid for blind navigation and wayfinding\" lays out the foundations needed to construct and use an RFID (Radio Frequency Identification)-based indoor navigation system. RFID tags are cheap, extremely power-efficient, immensely portable devices that can hold a small amount of information. They are extremely useful in pervasive computing for (literally) attaching computer data to physical objects. Indoor navigation is solved by rigging buildings with thousands of RFID tags, each containing a small piece of geographical information. The advantage of RFID readers vs. GPS is in the locality of the information that can be encoded in the RFID tag. The construction is based on \"mature technology\", so the limiting factors for widespread use of the infrastructure are simply economic feasibility and adoption. With demonstrated success in large corporations and on college campuses, the technology could become widespread and worked into building-code standards.\r\n\r\nIn order for Android to take advantage of and drive this potentially navigation-changing technology, Android would either need to implement an internal RFID reader or provide some simple, cost-effective RFID-reader attachment. The limiting factor is hardware. 
A blind-accessible indoor navigation tool would be extremely valuable for helping users navigate complicated indoor areas with little or no GPS coverage, such as airports, public transportation terminals, malls, and stadiums.\r\n\r\n### Cane-Based Feedback via Bluetooth\r\n\r\nA useful extension to the Phone Wand would be transferring orientation and vibration feedback to a walking cane via Bluetooth. A cane is the most natural and useful tool to aid blind people in everyday walking. This extension would require embedding hardware for compass sensors and vibration directly into a walking cane. There already exist bulky and expensive electronically aided canes on the market that use sonar to detect solid obstacles and puddles within a \"zone of safety\", but user feedback indicates these tend to be too expensive or non-functional for practical use. With the hardware problem solved, a CaneNavigator Android application would allow users to simply enter all route information into the cell phone; then Bluetooth would activate, and the cell phone could be placed in the user's pocket. All information sensing would then be embedded in the cane and transferred to the cell phone via Bluetooth. 
Information would then be processed and vibration signals would transfer from the phone to the cane via Bluetooth.\r\n\r\n\r\n[main-url]: http://code.google.com/p/mobileaccessibility/source/browse/#svn%2Ftrunk%2FPhoneWand"},{"id":"dietary-data-recording-system","titleShort":"Recording\nsystem","titleLong":"Dietary Data Recording System","urls":{"homepage":"http://ee.washington.edu/research/seal","published":"http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5766890&isnumber=5766834&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D5766890%26isnumber%3D5766834"},"jobTitle":"Student Researcher (Software Engineer)","location":"Seattle, WA","date":{"start":"4/2010","end":"6/2011"},"categories":["work","school","research","android","java","mobile","uw","teamwork"],"images":[{"fileName":"ddrs-app-screenshots.png","description":"A few of the many different screens of the DDRS app."},{"fileName":"meal-review.png","description":"This screen lets the user review all of the different data they recorded for a given meal."},{"fileName":"photo1.jpg","description":"This photo shows the complete device in the process of recording a video of a meal. It is important for the device to rotate around the meal, so that video captures multiple perspectives of the food."},{"fileName":"photo2.jpg","description":"This photo shows the complete device in the process of recording a video of a meal. 
It is important for the device to rotate around the meal, so that video captures multiple perspectives of the food."},{"fileName":"video-screen.png","description":"This is the screen that prompts the user to record a video of their food."},{"fileName":"starting-screen.png","description":"This is the home screen of the DDRS app."},{"fileName":"device2.png","description":"Levi also created a second app that was used for internal testing of the video-based volume-calculation algorithm."}],"videos":[],"content":"During his undergraduate studies at the University of Washington, Levi spent more than a year working with [Professor Alexander Mamishev][mamishev-url] in the [Sensors Energy and Automation Laboratory][seal-url]. The lab had been hired by the [Fred Hutchinson Cancer Research Center][fred-hutch-url] to create a Dietary Data Recording device that would better record the dietary intake of their study participants.\r\n\r\nThe lab's work revolved around the development of an application that ran on Android devices. Levi was the sole developer for this.\r\n\r\n## Features\r\n\r\nSome of the application's functionalities included:\r\n\r\n- Recording accelerometer and magnetometer data and associating them with individual video frames in order to analyze\r\nthe volume of food\r\n- Firing a laser grid via an external, custom-made, Bluetooth-compatible device\r\n- Recording and playing audio and video\r\n- Scanning barcodes\r\n- Compressing, uploading, and downloading data from the databases on our server\r\n- Recording dietary info in an SQLite database\r\n- Parsing SQL data to and from an XML file\r\n\r\n## Paper\r\n\r\nLevi co-authored a paper on this research entitled [A Pervasive Dietary Data Recording System][paper-url].\r\n\r\n### Abstract\r\n\r\n> The purpose of this research is to determine how beneficial the use of real-time, computer-mediated dietary data recording can be in medical studies. 
Current methods of dietary tracking and assessment typically involve paper and pencil food diaries and/or periodic dietary interviews, and these tend to be imprecise, yielding inconclusive results as to how a study participant’s diet affects his or her health. Reasons for this imprecision include participant bias, errors in the participant’s memory, errors in the participant’s judgment of food quantities, and underreporting from the participant. The use of computer technology should be able to remedy many of these problems. This project focuses on creating a device based on mobile phones and hand-held computers to record a participant’s dietary intake. This Dietary Data Recording System (DDRS) will serve as a convenient method for real-time documentation of a participant’s intake, and will allow the participant to record both an audio and a visual description of their food. A fundamental aspect of DDRS is the use of a laser-generated grid of distances in coordination with stereo-optic images for the determination of food volume. This device should provide confirmation as to whether real-time computer-assisted recording of dietary data is manageable and accurate.\r\n\r\n[mamishev-url]: http://ee.washington.edu/faculty/mamishev/\r\n[seal-url]: http://ee.washington.edu/research/seal/\r\n[fred-hutch-url]: http://fredhutch.org/en.html\r\n[paper-url]: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5766890&isnumber=5766834&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D5766890%26isnumber%3D5766834"},{"id":"uw-graphics","titleShort":"OpenGL\nanimations","titleLong":"OpenGL models and animations","urls":{},"jobTitle":"","location":"Seattle, WA","date":"2/2011","categories":["school","uw","opengl","animation","3D","c++","teamwork"],"images":[{"fileName":"screenshot1.png","description":"A screenshot from the program showing a frog playing a ukulele. 
This view includes textures."},{"fileName":"screenshot2.png","description":"A screenshot from the program showing a frog playing a ukulele. This view includes shading."},{"fileName":"screenshot3.png","description":"A close-up screenshot from the program showing a frog playing a ukulele. This view includes textures."}],"videos":[{"videoHost":"youtube","id":"r8-zo-j-7PU","description":"A video showing an animated frog playing a ukulele."}],"content":"As part of a graphics course at the University of Washington, Levi co-developed a model and animation of a frog playing a ukulele.\r\n\r\n_OpenGL with C++_"},{"id":"rainydayukes.com","titleShort":"Business\nwebsite:\nUkuleles","titleLong":"A hand-made ukulele business","urls":{"homepage":"https://web.archive.org/web/20161020110300/http://www.rainydayukes.com/"},"jobTitle":"","location":"","date":"5/2010","categories":["side-project","web","website","frontend","jquery","php","solo-work"],"images":[{"fileName":"screenshot1.png","description":"The products page."},{"fileName":"screenshot4.png","description":"The home page features a carousel of delightful images."},{"fileName":"screenshot5.png","description":"The home page features a carousel of delightful images."},{"fileName":"screenshot6.png","description":"The home page features a carousel of delightful images."},{"fileName":"screenshot7.png","description":"The home page features a carousel of delightful images."},{"fileName":"screenshot2.png","description":"The listen page includes videos with songs recorded on Rainy Day Ukes ukuleles. These videos feature the supremely talented baritone Drew Dresdner."},{"fileName":"screenshot3.png","description":"A handmade ukulele company isn't complete without its own signature cocktail!"}],"videos":[],"content":"_[Rainy Day Ukes][main-url] was an online business selling handmade ukuleles._\r\n\r\nLevi built this website for his friend's business. It was originally hosted at rainydayukes.com, but has since been taken down. 
However, you can still check out the site on the [Wayback Machine][main-url].\r\n\r\nAll designs were done by the supremely talented visual designer [Ryan Maher][ryan-url].\r\n\r\n\r\n[main-url]: https://web.archive.org/web/20161020110300/http://www.rainydayukes.com/\r\n[ryan-url]: http://linkedin.com/in/ryanmichaelmaher"}]}