<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.jjmtaylor.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.jjmtaylor.com/" rel="alternate" type="text/html" /><updated>2026-04-10T17:32:46+00:00</updated><id>https://www.jjmtaylor.com/feed.xml</id><title type="html">James JM Taylor</title><subtitle>Because you want a disciplined, innovative worker with a mean streak for pragmatism and a lifelong passion for learning. I take great pride in delivering delightful experiences to users of my mobile applications, and learned Sketch and Lottie to make up for the lack of a designer on my personal projects. I quickly realized though that an app is all for naught without a solid backend to support it. So I used the Kotlin from my native Android work to jumpstart into Spring Boot server development. Now, with full command of the stack I’ve returned to delivering holistic applications for my users.</subtitle><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><entry><title type="html">Vibe Coding</title><link href="https://www.jjmtaylor.com/vibe-coding/" rel="alternate" type="text/html" title="Vibe Coding" /><published>2026-04-01T20:00:00+00:00</published><updated>2026-04-01T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/vibe-coding</id><content type="html" xml:base="https://www.jjmtaylor.com/vibe-coding/"><![CDATA[<p><img src="/assets/vibe.jpg" alt="Vibes" /></p>

<p>Alright, I know I’m pretty late to the party on this one, but I wanted to have some meat on this post, and so far AI-assisted programming has been anything but that.  Over the last year I’ve used it for minor scripting, pull-request reviews, and other small tasks.  The hilarious thing is that as I write this post, Copilot auto-complete keeps suggesting whole paragraphs of text rambling on about how AI is a “game-changer” and “revolutionizing the way we code”. Fortunately for us humans, it’s a little more nuanced than that.</p>

<p>First, a little about my setup.  My workplace has a corporate Copilot license for use by the various R&amp;D teams.  I’ve configured the Copilot plugin for VS Code, Android Studio, and Xcode. When the plugins first came out about a year ago, they were just a client wrapper for the chat terminal that you would normally interact with on the web. Since that initial offering they’ve matured, with “Ask”, “Agent”, and “Plan” modes.  “Ask” is basically the functionality they had a year ago.  Ask a question, get an answer.  “Agent” on the other hand has the ability to make changes to your code base, automatically implementing features and fixing bugs. “Plan” bridges the gap between “Ask” and “Agent.”  It uses read-only tools to examine your codebase, identify necessary changes, and produce a detailed, ordered set of atomic steps. Unlike “Ask” it accepts a broader context and can refine its actions iteratively.  Unlike “Agent”, “Plan” does not take any actions on its own.  Instead it will write a plan, either as a separate markdown file or within the conversation.  This plan can be completed later either by you or an agent. Perhaps its biggest shortcoming is that it cannot leverage MCP (Model Context Protocol) integrations.</p>

<p>MCP servers are the newest agentic innovation to come out.  They allow the model to access additional context through other applications that implement the protocol.  For example, if you have a calendar application that implements MCP, the model can access your calendar data to schedule meetings or set reminders. The Jira, GitHub, and Figma desktop clients all now offer MCP integrations.  This means that the model can access your Jira tickets, GitHub pull requests, and Figma designs to inform its responses.</p>

<p>I recently leveraged this by integrating the Jira and Figma MCPs into my workflow. I asked the model to review a Jira ticket as well as its associated Figma design.  The model was able to access the Jira ticket to understand the requirements and acceptance criteria, and then accessed the Figma design to understand the visual requirements. Over the course of about half an hour the model implemented a rough draft of the changes necessary. It used our design system to implement the UI elements, and then implemented the necessary logic to make the feature work.</p>

<p>There were quite a few misses though.  The code didn’t account for the need to scroll if a user had a long sharer list.  It also hallucinated text that wasn’t actually present in the Figma design.  It failed to implement the @SerialName annotations that our other DTOs (Data Transfer Objects) had, which would have inevitably caused crashes for users upgrading their app version. Funnily enough, it introduced other issues that slipped past me but that the Copilot automatic pull-request reviewer did catch.  For thirty minutes of working with the model, I generated about three hours’ worth of cleanup that I had to do after the fact.</p>

<p>Ultimately, I think the biggest value I got out of this was the ability to quickly generate a rough draft of the feature.  It was able to take the requirements and design and turn them into code about twice as fast as I might have.  However, the quality of the code was not great, and it required a lot of cleanup to get it to a production-ready state.  Going forward I plan to continue trying to use AI-assisted programming as much as I can, but it’s a long way off from enabling me to vibe code my way through my day-to-day work.</p>

<p>Photo by <a href="https://unsplash.com/@lukejonesdesign?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Luke Jones</a> on <a href="https://unsplash.com/photos/a-close-up-of-a-computer-circuit-board-tBvF46kmwBw?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="software-engineering" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Combined Arms Programming</title><link href="https://www.jjmtaylor.com/combined-arms-programming/" rel="alternate" type="text/html" title="Combined Arms Programming" /><published>2026-03-01T20:00:00+00:00</published><updated>2026-03-01T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/combined-arms-programming</id><content type="html" xml:base="https://www.jjmtaylor.com/combined-arms-programming/"><![CDATA[<p><img src="/assets/combinedArms.jpg" alt="Combined Arms" /></p>

<p>I recently listened to a podcast discussing the People’s Liberation Army’s recent pivot from a massive conscript force to a much smaller (and yet even more expensive!) professional military force.  The point the podcast made was that China has been quietly observing the US and Russia and has realized that actual military power is no longer measured in the number of soldiers, tanks, or planes that you can muster, but rather on how well they work together.</p>

<p>Russia’s doctrine of “Deep Battle” is operationally sound.  It calls for creating a tactical breach of the front line and, if that proves successful, exploiting it with deep, mobile, and relentless attacks. The problem for Russia is that such an approach requires deep integration between the different branches.  The Russian Airborne forces of the VDV need SEAD (Suppression of Enemy Air Defense) aircraft to ensure their safe landing at the airfields they need to seize.  Russian armor in turn needs to penetrate the front line so that Russian infantry can relieve the VDV, who are only lightly equipped and supplied.  Meanwhile logistics needs to keep up with the advance while rocket forces neutralize enemy strongpoints encountered along the way.</p>

<p>The weakness of Russia’s integration of forces was on full display during the Battle of Antonov Airport in February 2022. Russian SEAD failed to suppress Ukrainian air defenses, leading to the loss of several VDV helicopters on the initial assault.  Russian armor was likewise unable to establish a breach in the Ukrainian lines in time, causing the VDV to “wither on the vine” as they exhausted their ammunition against Ukrainian counterattacks. Russian combat arms, rather than cooperating as a combined arms force, instead siloed themselves within their occupational specialties.  This allowed them to be defeated in detail, leading to the failure of the whole operation.</p>

<p>The more experienced developers amongst you probably immediately see the parallels to software engineering.  Programmers, especially at large companies, are prone to silo themselves within their subject matter domains. Development teams throw bugs “over the wall” to Quality Assurance engineers rather than working with them to achieve a deeper understanding of the issues.  Backend teams will blame frontend teams for misusing the APIs that they’ve developed.  Frontend teams in turn will blame backend teams for slow response times or unexpected responses.</p>

<p>The irony is that, just like in Russia and China, these problems become more likely as companies get bigger.  I suspect that a function-based (and siloed) organization, rather than a product-based one, is easier for higher-level management to understand.  Rather than having to cope with fifteen unique team cultures and practices, they can grasp onto a web of feature throughlines common across all the products.  Simultaneously, siloed teams offer a lot of obvious efficiencies that can easily be translated into accomplishments for annual performance reviews.  Code re-use is maximized since each team writes their code once and then packages it for consumption by other teams. All penny-wise investments that ultimately prove pound-foolish, as the Russians learned to their woe in 2022.</p>

<p>So, what’s the answer?  The US Military circumvents the natural drift of larger organizations towards silos through the adopted Prussian principle of “auftragstaktik”, or “mission-type tactics”. Under auftragstaktik leaders provide subordinates with a mission and commander’s intent, and then allow them autonomy in how to achieve it.  This is best paralleled in the software world through the empowerment of product owners (as opposed to feature owners).  These individuals (usually from marketing) identify and prioritize customer needs for their assigned product.  They are embedded within cross-functional, self-sufficient teams responsible for a single, discrete application.  All the specialties they could need, to include front-end &amp; back-end developers, cybersecurity experts, quality assurance engineers, and UX/UI designers, are included in the team.  Together they work collectively on the most pressing needs of their customers as identified by the product owner, generating a product users actually want rather than maximizing a KPI (Key Performance Indicator) that an executive will see on a slide once and not spare a second thought to.</p>

<p>Admittedly this isn’t as efficient as a functional organization, and for larger organizations can be significantly more expensive as China is discovering in its modernization efforts.  But for building a quality product that accomplishes its assigned mission, the vertical integration of product ownership is an indispensable organizational configuration.</p>

<p>By Photo by Spc. Jensen Guillory - <a rel="nofollow" class="external free" href="https://www.dvidshub.net/image/6432856/m2-bradley-infantry-fighting-vehicles-northeast-syria">https://www.dvidshub.net/image/6432856/m2-bradley-infantry-fighting-vehicles-northeast-syria</a>, Public Domain, <a href="https://commons.wikimedia.org/w/index.php?curid=101900535">Link</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="software-engineering" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Preventive Maintenance Checks and Services (PMCS)</title><link href="https://www.jjmtaylor.com/pmcs/" rel="alternate" type="text/html" title="Preventive Maintenance Checks and Services (PMCS)" /><published>2026-02-01T20:00:00+00:00</published><updated>2026-02-01T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/pmcs</id><content type="html" xml:base="https://www.jjmtaylor.com/pmcs/"><![CDATA[<p><img src="/assets/juke.png" alt="Nissan Juke" /></p>

<p>I generally try to take good care of my car.  In the Army we took vehicular maintenance very seriously and regularly conducted PMCS, or “Preventive Maintenance Checks and Services”. Every unit that I was in would dedicate at least an entire day of the week, every week, to checking POL (Petroleum, Oil, and Lubricants), transmission, tires, and the like.  On these “Motorpool Mondays” some units went as far as to lock their Soldiers in the garage until the end of the duty day to ensure due diligence was done, OSHA regulations be damned.</p>

<p>So, when my Nissan Juke began making odd noises I was mortified.  I had religiously followed the maintenance schedule since purchasing the car in 2013.  When my car was broken into in 2016 and the owner’s manual was stolen along with everything else in my glove compartment, I paid the $98 to buy a new manual from Nissan just so I could continue to record maintenance conducted in the manual appendix.</p>

<p>When I took the car into my mechanic I found out that not only had I worn through to the metal on my front brake pads, but that the brake rotors had been irreparably damaged as well.  All told it was going to be a $540 repair job, two-thirds of which could have been avoided if I had replaced my brake pads earlier.  What had gone wrong? Unfortunately for me, my mechanic had neglected to inspect the brake pads the last couple of visits.  This allowed them to wear down until they damaged the brake rotors.  It was an expensive mistake for me, but one I knew I could avoid going forward by implementing two changes.</p>

<p>The first was to create a <a href="https://docs.google.com/spreadsheets/d/1yZe3x2L9aeYif0fHmpmuJPQGhuSsK1r3Y6OJzHOpdh4/edit?usp=sharing">Google Sheets document</a>. I transcribed my vehicle’s maintenance schedule on the first tab, labeled “Schedule”.  The second tab I labeled “MaintenanceLog” and copied over the service charges recorded on the receipts I had saved over the past eight years.  Once that was done, I created an Apps Script (full script included at the end of the article for reference). Apps Scripts are Google’s version of the Visual Basic macros you might use with a Microsoft Excel workbook.  My script correlates entries on the “Schedule” tab with those on the “MaintenanceLog” tab and highlights items that are overdue, either in terms of mileage or date of last service. So now the next time the sticker on the inside of my windshield says I’m due for maintenance I can check my spreadsheet and see exactly what needs to be worked on.</p>
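<p>For reference, the script expects the “Schedule” tab to contain the headers “Service Item”, “Interval (Months)”, and “Interval (Miles)”, plus “Current Reading” and “Date” label cells with their values immediately to the right, and the “MaintenanceLog” tab to contain “Date of Service”, “Mileage”, and “Work Performed”. A minimal layout (the service items and numbers below are made-up examples) might look like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Schedule tab:
Service Item      | Interval (Months) | Interval (Miles)
Engine Oil        | 6                 | 5000
Brake Pads        | 12                | 20000
Current Reading   | 98000
Date              | 2026-02-01

MaintenanceLog tab:
Date of Service   | Mileage | Work Performed
2025-08-15        | 93500   | Engine Oil change
</code></pre></div></div>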

<p>For the second change I resolved to conduct as much maintenance as feasible on my own.  Truth be told, I had grown complacent.  I’ve always personally done minor maintenance on my car like replacing the headlights, the spark plugs, and the battery as necessary.  Everything else I had outsourced to my mechanic.  Oil, transmission, transfer case, coolant, you name it. I simply drove my car until my odometer matched the mileage on the sticker and then brought it in to let them do the service. I figured that they had the experience to do the necessary maintenance faster and more thoroughly than I ever could.  Which is why I think it fitting to end with a quote from Robert Pirsig that I may have dismissed out of hand in my first reading of <a href="https://jjmtaylor.com/zen-and-the-art-of-software-maintenance/">Zen and the Art of Motorcycle Maintenance</a>.</p>

<p>“[The mechanics] were like spectators. You had the feeling they had just wandered in there themselves and somebody had handed them a wrench. There was no identification with the job. No saying, ‘I am a mechanic.’ At 5 P.M. or whenever their eight hours were in, you knew they would cut it off and not have another thought about their work … We were all spectators. And it occurred to me there is no manual that deals with the real business of motorcycle maintenance, the most important aspect of all … Caring about what you are doing”</p>

<p>I’m not saying that the mechanics weren’t professional. But because they are professionals and not personally invested, the stakes aren’t as high for them. My car was just one of many worked on that day.  What I’ve realized after all this is that the onus is on me (and other drivers) to care about the vehicles we drive, because we’re the ones that rely on them. We can outsource expertise to a certain extent, but not responsibility. Hopefully with this epiphany we can take a little better care of the things and people that matter to us.</p>

<p>The earlier mentioned Apps Script, as promised:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>function onOpen(e) {
  highlightSchedule();
}

function onEdit(e) {
  highlightSchedule();
}

function highlightSchedule() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const schedule = ss.getSheetByName('Schedule');
  const logSheet = ss.getSheetByName('MaintenanceLog');
  if (!schedule || !logSheet) {
    throw new Error('Sheets named "Schedule" and "MaintenanceLog" must exist in this spreadsheet.');
  }

  // Read Schedule values
  const schedRange = schedule.getDataRange();
  const schedValues = schedRange.getValues();
  if (schedValues.length &lt; 2) return; // no data

  const schedHeader = schedValues[0].map(String);
  const colIdx = {
    item: findHeaderIndex(schedHeader, 'Service Item'),
    months: findHeaderIndex(schedHeader, 'Interval (Months)'),
    miles: findHeaderIndex(schedHeader, 'Interval (Miles)')
  };
  if (colIdx.item === -1 || colIdx.months === -1 || colIdx.miles === -1) {
    throw new Error('Schedule sheet must have headers: Service Item, Interval (Months), Interval (Miles).');
  }

  // Read current reading and current date overrides from anywhere in Schedule
  const contextInfo = extractContextInfoFromSchedule(schedValues);
  const currentReading = contextInfo.currentReading; // number or null
  const today = contextInfo.currentDate || new Date();

  // Build last-service index from MaintenanceLog
  const logRange = logSheet.getDataRange();
  const logValues = logRange.getValues();
  if (logValues.length &lt; 2) return; // no log data

  const logHeader = logValues[0].map(String);
  const logIdx = {
    date: findHeaderIndex(logHeader, 'Date of Service'),
    mileage: findHeaderIndex(logHeader, 'Mileage'),
    work: findHeaderIndex(logHeader, 'Work Performed')
  };
  if (logIdx.date === -1 || logIdx.mileage === -1 || logIdx.work === -1) {
    throw new Error('MaintenanceLog sheet must have headers: Date of Service, Mileage, Work Performed.');
  }

  // Collect list of schedule items to match against log entries
  const scheduleItems = [];
  for (let r = 1; r &lt; schedValues.length; r++) {
    const name = String(schedValues[r][colIdx.item]).trim();
    if (name) scheduleItems.push(name);
  }

  Logger.log('scheduleItems: ' + scheduleItems);

  const lastServiceMap = buildLastServiceIndex(logValues, logIdx, scheduleItems);

  // Clear existing backgrounds for the schedule table region
  const lastRow = schedule.getLastRow();
  const lastCol = schedule.getLastColumn();
  schedule.getRange(2, 1, Math.max(0, lastRow - 1), lastCol).setBackground(null);

  // Apply color rules row by row
  for (let r = 1; r &lt; schedValues.length; r++) {
    const rowVals = schedValues[r];
    const itemName = String(rowVals[colIdx.item]).trim();
    
    if (!itemName) continue; // skip empty rows

    const intervalMonths = toNumber(rowVals[colIdx.months]);
    const intervalMiles = toNumber(rowVals[colIdx.miles]);

    const last = lastServiceMap[normalize(itemName)] || null;

    // If a schedule item has no record in the maintenance log, highlight grey
    if (!last) {
      schedule.getRange(r + 1, 1, 1, lastCol).setBackground('#dddddd');
      continue;
    }

    // Logger.log('itemName: ' + itemName + '; last service date: ' + last.date + ', last service mileage: ' + last.mileage);

    let colorTime = null;
    let colorMiles = null;

    // Time-based rule
    if (last &amp;&amp; last.date &amp;&amp; isFinite(intervalMonths)) {
      const monthsSince = monthsBetween(last.date, today);
      if (monthsSince &gt;= intervalMonths) {
        colorTime = 'red';
      } else if (intervalMonths - monthsSince &lt;= 1) {
        colorTime = 'yellow';
      }
    }

    // Mileage-based rule
    if (last &amp;&amp; isFinite(last.mileage) &amp;&amp; isFinite(intervalMiles) &amp;&amp; isFinite(currentReading)) {
      const milesSince = currentReading - last.mileage;
      if (milesSince &gt;= intervalMiles) {
        colorMiles = 'red';
      } else if (intervalMiles - milesSince &lt;= 1000) {
        colorMiles = 'yellow';
      }
    }

    const finalColor = resolveColor(colorTime, colorMiles);
    if (finalColor) {
      schedule.getRange(r + 1, 1, 1, lastCol).setBackground(finalColor);
    }
  }
}

function resolveColor(colorTime, colorMiles) {
  // Favor red over yellow, yellow over none
  if (colorTime === 'red' || colorMiles === 'red') return 'red';
  if (colorTime === 'yellow' || colorMiles === 'yellow') return 'yellow';
  return null;
}

function findHeaderIndex(headerRow, name) {
  const target = name.toLowerCase();
  for (let i = 0; i &lt; headerRow.length; i++) {
    if (String(headerRow[i]).trim().toLowerCase() === target) return i;
  }
  return -1;
}

function toNumber(v) {
  if (v === null || v === undefined) return NaN;
  if (typeof v === 'number') return v;
  return Number(String(v).replace(/[$,\s]/g, ''));
}

function parseLogDate(v) {
  if (v instanceof Date) return v;
  const s = String(v).trim();
  // Accept YYYY-MM-DD, YYYY/MM/DD, or YYYYMMDD
  const m = s.match(/^(\d{4})[-\/]?(\d{2})[-\/]?(\d{2})$/);
  if (m) {
    const y = Number(m[1]);
    const mon = Number(m[2]);
    const d = Number(m[3]);
    return new Date(y, mon - 1, d);
  }
  // Fallback: try Date parse
  const d2 = new Date(s);
  if (!isNaN(d2.getTime())) return d2;
  return null;
}

function monthsBetween(startDate, endDate) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const days = (endDate.getTime() - startDate.getTime()) / msPerDay;
  return days / 30; // approximate months
}

function normalize(s) {
  return String(s).toLowerCase().replace(/[^a-z0-9\s]/g, ' ').replace(/\s+/g, ' ').trim();
}

function tokenSet(s) {
  return new Set(normalize(s).split(' ').filter(Boolean));
}

function matchesWorkPerformed(scheduleItem, workString) {
  // Token-based match: all tokens of schedule item must appear in work string
  const itemTokens = tokenSet(scheduleItem);
  const workTokens = tokenSet(workString);
  for (const t of itemTokens) {
    if (!workTokens.has(t)) return false;
  }
  return true;
}

function buildLastServiceIndex(logValues, logIdx, scheduleItems) {
  // Produce a map from normalized schedule item name to {date, mileage} for the most recent log entry
  const map = {};
  const normalizedItems = scheduleItems.map((n) =&gt; ({ raw: n, norm: normalize(n) }));

  for (let r = 1; r &lt; logValues.length; r++) {
    const dateVal = parseLogDate(logValues[r][logIdx.date]);
    const mileageVal = toNumber(logValues[r][logIdx.mileage]);
    const workVal = String(logValues[r][logIdx.work] || '').trim();
    if (!workVal) continue;
    for (const it of normalizedItems) {
      if (matchesWorkPerformed(it.raw, workVal)) {
        const prev = map[it.norm];
        if (!prev || (dateVal &amp;&amp; prev.date &amp;&amp; dateVal.getTime() &gt; prev.date.getTime())) {
          map[it.norm] = { date: dateVal || prev?.date || null, mileage: isFinite(mileageVal) ? mileageVal : prev?.mileage || NaN };
        }
      }
    }
  }
  return map;
}

function extractContextInfoFromSchedule(schedValues) {
  let currentReading = null;
  let currentDate = null;

  for (let r = 0; r &lt; schedValues.length; r++) {
    for (let c = 0; c &lt; schedValues[r].length - 1; c++) {
      const label = String(schedValues[r][c]).trim().toLowerCase();
      const val = schedValues[r][c + 1];
      if (label === 'current reading') {
        currentReading = toNumber(val);
      } else if (label === 'date') {
        currentDate = parseLogDate(val);
      }
    }
  }
  Logger.log('currentReading: ' + currentReading);
  Logger.log('currentDate: ' + currentDate);
  return { currentReading, currentDate };
}
</code></pre></div></div>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="life" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Mazamas Leadership Development</title><link href="https://www.jjmtaylor.com/mazamas-ld/" rel="alternate" type="text/html" title="Mazamas Leadership Development" /><published>2026-01-05T20:00:00+00:00</published><updated>2026-01-05T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/mazamas-ld</id><content type="html" xml:base="https://www.jjmtaylor.com/mazamas-ld/"><![CDATA[<p><img src="/assets/ldprogram.jpg" alt="LD Program" /></p>

<p>In the <a href="https://jjmtaylor.com/2025-a-retrospective/">previous post</a> I mentioned that I would post about the requirements of the Mazamas Leadership Development (LD) program. The Mazamas take climb safety quite seriously, and as a result the LD program for certifying climb leaders is a multi-year commitment.  I find that pretty remarkable considering I’ll probably finish my Computer Science master’s degree faster than I will the Mazamas LD program. Despite the time commitment, I want to start the LD program for two reasons.  The first is altruistic and the second somewhat more selfish.  We’ll start with the selfless one first.</p>

<p>The raison d’être of the Mazamas is leading mountain climbs for the broader public.  It follows that the health of the organization is determined by the number of climbs the Mazamas can support at any one time.  The Mazamas have more than enough ropes, pickets, shovels, avalanche beacons, probes, and all the other material requirements for mountain climbing. Similarly, as a non-profit the Mazamas have broad access to the National and State Parks where all manner of mountains are found.  With plenty of equipment and plenty of mountains, the Mazamas are only short on qualified climb leaders. Competition for openings on scheduled climbs is fierce.  By becoming a climb leader myself I can help alleviate that bottleneck.</p>

<p>My second reason, like I said, is a little more self-serving.  I want to be able to climb where I want, when I want.  As a climber I’m at the mercy of what climb leaders choose to post to the schedule. By becoming a climb leader I can take the initiative in putting climbs on the calendar for times that are convenient for me.  If I want to climb Rainier I can post it to the calendar on a date that works for me, my employer, and my family rather than having to move heaven and earth to try and make an arbitrary date work.</p>

<p>Beginning the LD program requires:</p>

<ul>
  <li>A resume of climbing experience and education. Completion of ICS is required.</li>
  <li>A letter explaining why I want to become a climb leader.</li>
  <li>References from three current climb leaders.</li>
</ul>

<p>Obtaining a “C” level provisional leader status requires that I:</p>

<ul>
  <li>Assist with three ICS field sessions as an instructor and obtain evaluations from three different leaders.</li>
  <li>Assist with three Mazama climbs with three different leaders.</li>
  <li>Assist in an Introduction to Alpine Climbing (IAC) snow field session.</li>
  <li>Assist in an IAC rock field session.</li>
  <li>Organize and lead an IAC hike.</li>
  <li>Lead an IAC breakout session.</li>
  <li>Complete crevasse rescue, map and compass, avalanche, accident management, CPR and Mountain First Aid training.</li>
  <li>Complete the annual climb leader update.</li>
</ul>

<p>Luckily, since I already have more than 6 Mazama climbs and 6 non-Mazama climbs I’ll automatically start as an assistant leader.  To acquire full leader status I’ll have to lead three climbs assisted by full climb leaders. Each climb must have a different climb leader, who will submit an evaluation to the climb committee upon completion.</p>

<p>I anticipate it’ll be quite the undertaking, but so is anything worth doing.  I hope to be able to complete two climbs a year in addition to assisting with the ICS and IAC field sessions. That being said, the Mazamas are also pretty patient. If I have to take a temporary break for a particularly rigorous college course, or if I want to pause a bit to help volunteer at my kids’ school, the Mazamas provides the flexibility in the program to be able to do that.</p>

<p>I hope you finished this article with a better understanding of the lengths the Mazamas go to in pursuit of competent climb leaders.  In any case, please wish me luck!</p>

<p>Photo by <a href="https://unsplash.com/@frankokay?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Frank Okay</a> on <a href="https://unsplash.com/photos/turned-off-vintage-crt-television-on-road-R1J6Z1cnJZc?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="life" /><category term="mazamas" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">A Retrospective on 2025</title><link href="https://www.jjmtaylor.com/a-retrospective-on-2025/" rel="alternate" type="text/html" title="A Retrospective on 2025" /><published>2025-11-08T20:00:00+00:00</published><updated>2025-11-08T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/a-retrospective-on-2025</id><content type="html" xml:base="https://www.jjmtaylor.com/a-retrospective-on-2025/"><![CDATA[<p><img src="/assets/retro.jpg" alt="Retrospective" /></p>

<p>As we come to the end of 2025 I figured it would be a good opportunity to look back on <a href="https://jjmtaylor.com/2025-resolutions/">last year’s resolutions</a>:</p>

<ul>
  <li>Climb three new mountains</li>
  <li>Complete Intermediate Climbing School (ICS)</li>
  <li>Start my masters in Computer Science at OSU</li>
</ul>

<p>I completed ICS in March and started my master’s program in September. I had originally planned to climb Mt. Rainier with my classmates, Mt. Adams with my brother, and a third mountain by assisting in the Introduction to Alpine Climbing course. The funny thing is that I didn’t do any of those, but I did climb 12 new mountains over the course of three days by completing the Tatoosh traverse.  I’d like to climb more peaks this year, but given the unpredictability of schedules and weather I don’t think I’ll make summits a New Year’s resolution going forward.</p>

<p>For 2026 I have the following goals:</p>

<ul>
  <li>Read 26 books.</li>
  <li>Complete 4 more classes in my major.</li>
  <li>Start the Mazamas Leadership Development (LD) program.</li>
</ul>

<p>I informally read 24 books in 2024 and 25 books in 2025, so I figure I might as well make it a running New Year’s resolution. I’ve been keeping track of the books I’ve read <a href="https://docs.google.com/spreadsheets/d/1TjrlQqMzMsOfFgIzb5NVem1HLiTY4qbxFV1keE0U9_8/edit?usp=sharing">here</a> if you want to follow along.</p>

<p>I completed Programming Languages for my first course at Oregon State University and will start Computer Architecture in January. The three other courses will be determined by what’s on offer in the OSU course catalogue for those three succeeding quarters.</p>

<p>For the LD Program I plan to pursue certification for leading “C” level climbs.  This would allow me to lead climbs on almost all the mountains I could want to, including Mt. Olympus, Mt. Rainier, and Mt. Shasta.  The LD program is actually quite involved, so I plan to make a full write-up in a subsequent post.</p>

<p>Overall I think my resolutions are fairly balanced in terms of ambition and feasibility, and feel relatively confident in my ability to accomplish them.  What about you? What resolutions do you have for next year?  Let us know in the comments.</p>

<p>Photo by <a href="https://unsplash.com/@frankokay?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Frank Okay</a> on <a href="https://unsplash.com/photos/turned-off-vintage-crt-television-on-road-R1J6Z1cnJZc?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="resolutions" /><category term="mazamas" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Passkey Support</title><link href="https://www.jjmtaylor.com/passkey-support/" rel="alternate" type="text/html" title="Passkey Support" /><published>2025-09-08T20:00:00+00:00</published><updated>2025-09-08T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/passkey-support</id><content type="html" xml:base="https://www.jjmtaylor.com/passkey-support/"><![CDATA[<p><img src="/assets/key.jpg" alt="Key" /></p>

<p>I recently had to integrate Google’s passkey APIs into an Android app that I support, so I figured I’d write up a brief summary of my findings.  This article is not intended to replace the <a href="https://developer.android.com/identity/sign-in/credential-manager">official documentation</a> and <a href="https://developer.android.com/courses/pathways/passkeys">Google passkey course</a>.  Instead, it’s supplemental reading covering my own lessons learned during the implementation.</p>

<p>First, why use passkeys? Passkeys are considered better than traditional MFA (Multi-Factor Authentication) because they eliminate passwords altogether, making them significantly more resistant to phishing attacks and data breaches.  They also provide a high level of security by leveraging device-based cryptographic keys and biometric authentication.  This gives passkeys the benefits of MFA without the login friction of traditional MFA methods, such as entering a username &amp; password, then receiving and inputting a code from an SMS message, then answering a security question, and so on.  Passkey authentication achieves MFA in a single, seamless step.</p>

<p>Passkeys are created on a device-by-device basis and are built on public key cryptography and Public Key Infrastructure (PKI).  The public key is registered with the applications you authenticate against, while the private key is stored by Google Password Manager. On Android 14 and above the specific API is called Credential Manager, and it supports third-party providers like 1Password, Okta, and Apple Keychain.  This has the added benefit of protecting applications from data breaches, because there is no password database to crack in the event of a breach of the applications’ backend servers.</p>

<p>In order to integrate a website with an app you need to update the assetLinks.json file on your website using the JSON for your app.  This JSON can be found in the Google Play Console under <code class="language-plaintext highlighter-rouge">Test And Release</code> &gt; <code class="language-plaintext highlighter-rouge">Setup</code> &gt; <code class="language-plaintext highlighter-rouge">App Signing</code>.  Once assetLinks.json is updated you also need to register the website domain in your application’s AndroidManifest.xml file.</p>
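<p>For reference, an assetLinks.json entry looks roughly like the sketch below. The package name and certificate fingerprint here are placeholders; the Play Console generates the real values for your app.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[
  {
    "relation": [
      "delegate_permission/common.handle_all_urls",
      "delegate_permission/common.get_login_creds"
    ],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.yourapp",
      "sha256_cert_fingerprints": ["AA:BB:CC:..."]
    }
  }
]
</code></pre></div></div>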

<p>When you register a passkey using the Google CredentialManager APIs you need to make sure your backend responds with the appropriate challenge/response JSON listed in the documentation.  The credential manager then saves the private key on the device and uploads the public key to the server, where your backend team (or you, if you’re a full-stack developer) persists it.</p>
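<p>As a rough sketch of the registration flow (assuming a recent version of the <code class="language-plaintext highlighter-rouge">androidx.credentials</code> Jetpack library; <code class="language-plaintext highlighter-rouge">fetchRegistrationOptionsJson</code> and <code class="language-plaintext highlighter-rouge">sendToBackend</code> are hypothetical stand-ins for your own networking code):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import android.app.Activity
import androidx.credentials.CreatePublicKeyCredentialRequest
import androidx.credentials.CreatePublicKeyCredentialResponse
import androidx.credentials.CredentialManager

suspend fun registerPasskey(activity: Activity) {
    val credentialManager = CredentialManager.create(activity)
    // 1. Fetch the challenge/response JSON (the creation options) from your backend.
    val optionsJson = fetchRegistrationOptionsJson()
    // 2. Prompt the user; the private key never leaves the device's credential provider.
    val response = credentialManager.createCredential(
        activity,
        CreatePublicKeyCredentialRequest(requestJson = optionsJson)
    ) as CreatePublicKeyCredentialResponse
    // 3. Upload the resulting public key credential for your backend to persist.
    sendToBackend(response.registrationResponseJson)
}
</code></pre></div></div>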

<p><img src="/assets/passkey-diagram.png" alt="Passkey Diagram" /></p>

<p>Once all those steps are complete your application can invoke the credential manager API to prompt the user for biometric verification.  Once verification is provided the app retrieves the credential from the device’s credential storage and transmits the signature to your application’s backend, which verifies it against the public key stored there.  If verification is successful your backend can then confidently issue a JWT (JSON Web Token) authenticating the user and authorizing them to perform any actions granted to that particular user.</p>
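<p>The sign-in half can be sketched similarly (again assuming <code class="language-plaintext highlighter-rouge">androidx.credentials</code>; the two helper functions are hypothetical placeholders for your own backend calls):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.PublicKeyCredential

suspend fun signInWithPasskey(activity: Activity) {
    val credentialManager = CredentialManager.create(activity)
    // 1. Fetch the authentication challenge JSON from your backend.
    val optionsJson = fetchAuthenticationOptionsJson()
    // 2. Prompt for biometric verification and retrieve the stored credential.
    val result = credentialManager.getCredential(
        activity,
        GetCredentialRequest(listOf(GetPublicKeyCredentialOption(requestJson = optionsJson)))
    )
    // 3. Send the signed assertion back for verification against the stored public key.
    val credential = result.credential
    if (credential is PublicKeyCredential) {
        sendAssertionToBackend(credential.authenticationResponseJson)
    }
}
</code></pre></div></div>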

<p>I learned a lot in the process of building this particular feature out for Dexcom and I hope that by reading this article you vicariously learned a bit about passkeys as well!</p>

<p>Photo by <a href="https://unsplash.com/@justmejuliee?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Julia Taubitz</a> on <a href="https://unsplash.com/photos/mural-of-a-girl-with-a-key-and-bird-_IheHAQqiZ0?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="android" /><category term="software-engineering" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Dependency Substitution</title><link href="https://www.jjmtaylor.com/dependency-substitution/" rel="alternate" type="text/html" title="Dependency Substitution" /><published>2025-08-08T20:00:00+00:00</published><updated>2025-08-08T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/dependency-substitution</id><content type="html" xml:base="https://www.jjmtaylor.com/dependency-substitution/"><![CDATA[<p><img src="/assets/elephants.jpg" alt="Gradle Elephants" /></p>

<p>I’ve had to do a fair amount of in-house SDK work over the last month, so I figured this month would be a good opportunity to cover dependency substitution.  When developing your own SDK it’s generally a good idea to periodically smoke test it within the context of the consuming client app.  Normally dependencies like SDKs are downloaded by Gradle from a remote artifact repository like Maven.  But when you’re developing the SDK it can be a real pain to generate a release binary and upload it to a remote artifact repository, just to turn around and download it again for your local sample client app.  Fortunately Gradle has a couple of different options to sidestep this whole process, significantly increasing your iteration speed.</p>

<p>The first approach is <a href="https://docs.gradle.org/current/javadoc/org/gradle/api/publish/maven/tasks/PublishToMavenLocal.html">publishToMavenLocal</a>. This has been around since Gradle 1.4 and is ideal for testing a library in isolation or as a dependency for a local project prior to release.  <a href="https://docs.gradle.org/current/javadoc/org/gradle/api/artifacts/DependencySubstitution.html">DependencySubstitution</a> on the other hand was introduced in Gradle 2.5 and is best used for developing and testing inter-dependent modules within a single, larger, composite build project.  It has the advantage of allowing for seamless switching between the source code of the app and the SDK.</p>

<p>This post is specifically about dependency substitution, which I’ve tended to favor because of the flexibility granted by compiling both source sets simultaneously. It allows me to treat both projects as a single app within Android Studio, making refactoring a comparative breeze.</p>

<p>You can enable dependency substitution locally by following the steps below:</p>

<ol>
  <li>Git clone the dependency repository as a sibling of your app’s repository.  For example, if you wanted to build Coil (an Android image loading library) locally, you’d clone the project from https://github.com/coil-kt/coil.</li>
  <li>In your <code class="language-plaintext highlighter-rouge">settings.gradle.kts</code> make sure to use <code class="language-plaintext highlighter-rouge">includeBuild</code> with a path to your local library.</li>
  <li>Within the includeBuild lambda create a <code class="language-plaintext highlighter-rouge">dependencySubstitution</code> lambda with a list of the specific dependencies you want substituted.  A full example is below:
    <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>includeBuild("../coil") {
    dependencySubstitution {
        substitute(module("io.coil-kt.coil3:coil-compose")).using(project(":coil-compose"))
    }
}
</code></pre></div>    </div>
  </li>
  <li>If you would like to make the substitution optional, add a flag to your <code class="language-plaintext highlighter-rouge">settings.gradle</code>, i.e. <code class="language-plaintext highlighter-rouge">if (file("../coil/.composite-include").exists()) {</code> followed by your <code class="language-plaintext highlighter-rouge">includeBuild</code> lambda.</li>
  <li>If you followed step 4 above you would create an empty file named <code class="language-plaintext highlighter-rouge">.composite-include</code> in the root directory of the project to be included.</li>
  <li>Build, Run and Debug your app as usual</li>
  <li>Coil will now show up in Android Studio’s Project Explorer and it will be possible to set breakpoints in those libraries.</li>
  <li>If you encounter compile errors due to missing or extra parameters, make sure that your local version of the library matches as closely as possible the version your app was using. You can check this in the <code class="language-plaintext highlighter-rouge">build.gradle.kts</code> file in the app’s <code class="language-plaintext highlighter-rouge">/app</code> directory.</li>
  <li>If you get a “no matching variant of project” error, it’s for one of two reasons:
    <ul>
      <li>A mismatch in the gradle plugin version between your app and the dependency. One or more will need to be updated to match the others.</li>
      <li>A mismatch in the gradle distributionUrl (in gradle-wrapper.properties) between your app and the dependency. One or more will need to be updated to match the others.</li>
    </ul>
  </li>
  <li>If you get a duplicate class error during compilation, one of two things is likely happening:
    <ul>
      <li>Your local copy of the dependency has a different transitive dependency version than your app is using.  You need to change one to match the other.</li>
      <li>Your local copy of the dependency underwent some processing that the remote version did not, or vice versa.  Minification with R8 is a good example.</li>
    </ul>
  </li>
  <li>If you followed step 4, to stop pulling Coil into your app build, delete the <code class="language-plaintext highlighter-rouge">.composite-include</code> file from Coil’s repo.</li>
  <li>Make sure to do a gradle sync after creating/deleting <code class="language-plaintext highlighter-rouge">.composite-include</code> in the Coil repo.</li>
</ol>
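
<p>Putting steps 3 through 5 together, the optional substitution in <code class="language-plaintext highlighter-rouge">settings.gradle.kts</code> would look roughly like this (reusing the Coil paths from the example above):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Only substitute when the opt-in marker file exists in the sibling checkout.
if (file("../coil/.composite-include").exists()) {
    includeBuild("../coil") {
        dependencySubstitution {
            substitute(module("io.coil-kt.coil3:coil-compose")).using(project(":coil-compose"))
        }
    }
}
</code></pre></div></div>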

<p>I hope you found this helpful!  Gradle dependency substitution can be a little tricky to set up at first, but once you have it configured I think you’ll find it indispensable for local library development.</p>

<p>Photo by <a href="https://unsplash.com/@photosbybeks?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Photos By Beks</a> on <a href="https://unsplash.com/photos/grey-elephant-on-green-grass-field-during-daytime-QzCfMi1Mbjs?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="gradle" /><category term="android" /><category term="software-engineering" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Acute Mountain Sickness</title><link href="https://www.jjmtaylor.com/ams/" rel="alternate" type="text/html" title="Acute Mountain Sickness" /><published>2025-07-05T20:00:00+00:00</published><updated>2025-07-05T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/ams</id><content type="html" xml:base="https://www.jjmtaylor.com/ams/"><![CDATA[<p><img src="/assets/shasta.jpg" alt="AMS" /></p>

<p>In my last post I briefly mentioned my failed Mt. Shasta attempt.  Today I will go into greater detail about what happened and what precautions I’ll take going forward.</p>

<p>At 14,162 feet tall, Mt. Shasta is the second highest mountain in the <a href="https://www.peakbagger.com/list.aspx?lid=5063">Mazamas 16 Northwest Peaks challenge</a>. It’s a nearly six hour drive from Portland, making it the most distant of the sixteen peaks in terms of travel, with Mt. Shuksan a close second at five and a half hours and Mt. Olympus at a paltry five hours. We took the Hotlum Bolam Glacier route, the easiest route on the north side of the peak.  It proved to be a much less crowded alternative to the south facing Avalanche Gulch route.</p>

<p>I had spent the week prior in Yellowstone with my family, sleeping at an average elevation of 6,500 feet.  After making the drive down to the North Gate trailhead, we established basecamp at 6,800 feet and spent the night.  We set off at 0900 the next morning and had established high camp at 10,000 feet around 1300 that same day.  The plan was to make a late alpine start for the summit at 0400 the next morning. I figured this would provide ample time to acclimatize for the final push. I went to bed early at 1800 to get as much sleep as possible.</p>

<p>Two hours later I woke up with a headache and severe nausea.  I vomited my half-digested chicken fried rice dinner into the same bag that I had used to rehydrate it.  Despite my best efforts to lay still and to go back to sleep, my heart rate continued to race at nearly a hundred beats per minute.</p>

<p>I drifted off into a fitful sleep before my alarm woke me up at 0300.  I was able to eat some oatmeal and to keep it down, but ultimately decided to stay at high camp in order to avoid threatening the likelihood of summiting for the rest of the group.  After a successful summit, the team returned to camp at 1400 and we began the long trip back to Portland.</p>

<p>Although I didn’t summit that day, I did learn some important lessons that should allow me to summit on the next attempt.  The first (and biggest) one was that acetazolamide, like all drugs, loses its efficacy over time.  The pills that I brought with me were nearly three years old and well past their expiration date. Not only was the medicine’s overall effect muted, the prophylactic increase in respiration rate was far less than what it would have otherwise been with a fresh prescription. The duration of the respiration rate increase was also substantially reduced.  I’ve found that the effects of new acetazolamide can last 24-36 hours after each dose.  Comparatively, the effects of my expired medication lasted less than 12 hours after initial ingestion. These limitations meant that I wasn’t able to acclimatize as rapidly as needed.</p>

<p>My second lesson learned was that I should be more measured in my response to AMS, and rely on AO4 (Alert and Oriented x 4) as an objective means to assess the impacts of altitude.</p>

<p><img src="/assets/ao4.png" alt="AO4" /></p>

<p>From what I was told after they returned, nearly the entire team suffered from nausea and headaches by the time that they reached the summit. Luckily they remained lucid and coordinated, completing the summit and returning safely.  Given the widespread nature of these symptoms, bringing ibuprofen to treat the swelling from AMS headaches and Emetrol for the nausea could go a long way toward addressing immediate discomfort. Both medications are non-drowsy, over-the-counter solutions that might help others actually enjoy their brief stay at the summit before descending.  This isn’t to say that they should be relied on for long periods of time, as they could easily conceal the development of more serious conditions like HACE (High Altitude Cerebral Edema) and HAPE (High Altitude Pulmonary Edema) if used to combat symptoms for more than 24 hours.</p>

<p>The third lesson learned was the negative effect that caffeine and alcohol can have on acclimatization.  While I personally abstained from both substances well in advance of the climb, there was one other climber who stayed in the camp with me on summit day due to AMS.  They said that while they don’t normally get AMS, they had consumed copious amounts of coffee on the drive down from Portland.  Caffeine actively constricts the blood vessels in your head and can worsen the headaches and nausea caused by AMS.  Alcohol, for its part, reduces red blood cell production.  Red blood cells are responsible for transporting oxygen throughout the body. Part of the acclimatization process known as erythrocytosis involves bone marrow increasing red blood cell production to allow oxygen to be more efficiently delivered to your body.  Alcohol hinders erythrocytosis and slows acclimatization.</p>

<p>To summarize, my major lessons were to only use fresh acetazolamide, to treat physiological symptoms with palliative medication while monitoring overall mental alertness, and to avoid caffeine and alcohol before and during the climb. I hope these lessons will allow me to eventually complete a summit of Mount Shasta. I’ll be excited to give it another shot.</p>

<p>Photo by <a href="https://unsplash.com/@sepoys?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Rohit Tandon</a> on <a href="https://unsplash.com/photos/aerial-photography-of-mountain-range-covered-with-snow-under-white-and-blue-sky-at-daytime-9wg5jCEPBsw?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="life" /><category term="mazamas" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Picking college courses</title><link href="https://www.jjmtaylor.com/college-courses/" rel="alternate" type="text/html" title="Picking college courses" /><published>2025-06-04T20:00:00+00:00</published><updated>2025-06-04T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/college-courses</id><content type="html" xml:base="https://www.jjmtaylor.com/college-courses/"><![CDATA[<p><img src="/assets/books.jpg" alt="Books" /></p>

<p>Good news and bad news this post.  First the bad news.  I had the privilege of being asked by one of my former climb leaders to join them for a summit attempt of Mt. Shasta. The team was able to summit but I was not. I got altitude sickness at high camp and was not able to join them for the summit push.  The lessons learned will be the subject of next month’s post.</p>

<p>Now the good news: I was accepted to and <a href="https://jjmtaylor.com/postgraduate-education/">start my postgraduate education in the Fall</a>! On September 18th Oregon State University’s Online Master’s in Computer Science program will begin its fall quarter. My current employer, Dexcom, also approved my tuition assistance request for $5,250 this year, so my education will be that much cheaper to undertake! The OSU MSCS program will ultimately require me to complete 45 credit hours, of which I will be taking 8 credit hours in my first quarter. That’s a heavier course load than I had originally planned on, but my academic advisor feels confident that I should be able to manage.  It will be a little odd going back to school after a 13-year hiatus, but hopefully once I start working through the coursework it will become second nature once again.</p>

<p>I signed up for two core courses, Programming Languages and Algorithms.  I wasn’t particularly good at either during my undergraduate work, so my plan is to throw myself into the deep end and get the hard stuff out of the way first.  After that’s done I can take the courses that I found more enjoyable, like computer graphics, artificial intelligence, and robotics, at my leisure. Overall, I’m really excited to begin my master’s.</p>

<p>What about you? If you’re undergoing a continuing education course let us know in the comments!</p>

<p>Photo by <a href="https://unsplash.com/@tomhermans?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Tom Hermans</a> on <a href="https://unsplash.com/photos/book-lot-on-table-9BoqXzEeQqM?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="life" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Rooting an Android emulator</title><link href="https://www.jjmtaylor.com/rooting-android/" rel="alternate" type="text/html" title="Rooting an Android emulator" /><published>2025-05-06T20:00:00+00:00</published><updated>2025-05-06T20:00:00+00:00</updated><id>https://www.jjmtaylor.com/rooting-android</id><content type="html" xml:base="https://www.jjmtaylor.com/rooting-android/"><![CDATA[<p><img src="/assets/rooted.jpg" alt="Rooted" /></p>

<p>I recently had to verify that a piece of security software correctly identified an Android device as rooted.  This of course raises the question, “how do you configure a device to be ‘rooted’?” The instructions and software vary by device, but if you’re just trying to confirm the detection of a device’s rooted status, the easiest method is to root an Android Studio emulator.  This is because there are no manufacturer-specific safeguards to prevent you from rooting the emulator, and you don’t have to worry about irrevocably corrupting actual hardware during the rooting process.  So without further preamble, the instructions!</p>

<p>In order to root your emulator, you’ll need to download the following software:</p>

<ul>
  <li><a href="https://magiskmanager.com/downloading-magisk-manager/">Magisk APK</a></li>
  <li><a href="https://github.com/newbit1/rootAVD/archive/refs/heads/master.zip">rootAVD</a></li>
</ul>

<p>You will also need to install ADB, which can be done in Android Studio directly by following the steps below:</p>

<ol>
  <li>Open Android Studio</li>
  <li>Click on “Tools” on the top bar</li>
  <li>Select SDK Manager</li>
  <li>In the Center Box, select the “SDK Tools” tab</li>
  <li>Scroll down to “Android SDK Platform-Tools”</li>
  <li>If already installed, skip to next step</li>
  <li>If no checkmark / not already installed, checkmark the box and hit “apply”.</li>
  <li>Complete the process to install SDK Platform-Tools</li>
</ol>

<p><img src="/assets/root0.png" alt="Step 0" /></p>

<p>Next, you’ll need to add “emulator” and “adb” to your PATH variable:</p>

<ol>
  <li>Open a Finder window</li>
  <li>Press Command + Shift + G, then press Enter to navigate to the root folder of your Mac</li>
  <li>Open “Users” directory</li>
  <li>Open your User directory (example: ab1234)</li>
  <li>Press Command + Shift + Period (“.”) to show hidden files</li>
  <li>Open .zshrc in a text editor (double click)</li>
  <li>Copy/paste the following lines into the file:</li>
  <li><code class="language-plaintext highlighter-rouge">export PATH=$PATH:/Users/&lt;USERNAME&gt;/Library/Android/sdk/platform-tools/</code> (adds adb to PATH)</li>
  <li><code class="language-plaintext highlighter-rouge">export PATH=$PATH:/Users/&lt;USERNAME&gt;/Library/Android/sdk/emulator/</code> (adds the emulator CLI to PATH)</li>
</ol>

<p><img src="/assets/root1.png" alt="Step 1" /></p>

<p>For the remainder of this tutorial, you will want to open 3 Terminal instances which will be used for the following:</p>

<p>A: Emulator<br />
B: Assorted commands to verify the next step(s) are ready<br />
C: rootAVD</p>

<p>If you haven’t already, download Magisk and rootAVD as seen above. Unzip the rootAVD .zip file. Relocate the folder to a more permanent location if desired. From Terminal A or B, run the command “emulator -list-avds”. You should see the names of all the emulators you’ve created in Android Studio:</p>

<p><img src="/assets/root2.png" alt="Step 2" /></p>

<p>From Terminal A, run the command <code class="language-plaintext highlighter-rouge">emulator -avd &lt;name of the emulator you wish to root&gt;</code> to start the emulator.</p>

<p><img src="/assets/root3.png" alt="Step 3" /></p>

<p>After the Emulator has started, drag-and-drop the Magisk apk from your download folder to the emulator. Open Magisk from your apps list and tap “Install” on the App tab (NOT the Magisk tab).</p>

<p><img src="/assets/root4.png" alt="Step 4" /></p>

<p>You may be prompted to authorize unknown sources in order to install. Click yes. Swipe back. You may be prompted to install an update. Confirm that you are not root by looking at the bottom of the Magisk screen. There should be a “Superuser” button that is greyed out, indicating you do not have root access.</p>

<p><img src="/assets/root5.png" alt="Step 5" /></p>

<p>In Terminal B, confirm that adb is in Path and can connect to your running emulator with the command “adb devices”. A “List of devices attached” should be displayed, with only one result of “emulator-XXXX”, where XXXX is a 4-digit port code. Ex: “emulator-5554”</p>

<p><img src="/assets/root6.png" alt="Step 6" /></p>

<p>In Terminal C, navigate to the folder where you saved the contents of rootAVD.zip. Still in Terminal C, confirm that rootAVD runs and detects your emulators by executing “./rootAVD.sh ListAllAVDs”. You should see a list of lines describing your emulators and the commands you would run to root them (which you can copy/paste).</p>

<p><img src="/assets/root7.png" alt="Step 7" /></p>

<p>For example, <code class="language-plaintext highlighter-rouge">./rootAVD.sh ~/Library/Android/sdk/system-images/android-33/google_apis/arm64-v8a/ramdisk.img</code> refers to an emulator with the following configuration:</p>

<ul>
  <li>android-33: The API / SDK version installed on this emulator</li>
  <li>google_apis_playstore / google_apis: Source used to install your API / SDK. Useful for helping you differentiate one emulator from another.</li>
  <li>arm64-v8a: The type of processor this emulator uses, which can help you match rootADV devices to Android Studio devices. Pixels use arm64.</li>
</ul>

<p>To summarize the example above: an arm64 (Pixel-class) emulator is running API 33 installed from the google_apis image.</p>

<p>Find the corresponding candidate command for the emulator you wish to root (now running in Terminal A; shown in yellow in the previous screenshot). Using the emulator from the example above, you would execute:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./rootAVD.sh ~/Library/Android/sdk/system-images/android-33/google_apis/arm64-v8a/ramdisk.img
</code></pre></div></div>

<p>Copy/Paste the command above, edit it for the emulator of your choice, and then hit enter.</p>

<p><img src="/assets/root8.png" alt="Step 8" /></p>

<p>The emulator should shut down. It may first pop-up a window asking if you wish to save state for next boot. Select No.</p>

<p><img src="/assets/root9.png" alt="Step 9" /></p>

<p>Open the Terminal where the emulator was running. Restart the emulator by running the previous command (simply press “up” then “enter” on the keyboard)</p>

<ul>
  <li>If the device successfully restarts, fantastic!</li>
  <li>If the device black-screens, wait a minute. It sometimes takes a bit before it boots.
Another possibility is that the emulator is active but the device is “off”. Press the power button at the top of the emulator’s right-side menu.</li>
</ul>

<p>Once the device has restarted, open Magisk. If the “Superuser” tab is no longer greyed out, you have successfully rooted your device!</p>

<p><img src="/assets/root10.png" alt="Step 10" /></p>

<p>Photo by <a href="https://unsplash.com/@deedeedss?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">DeeDee Wang</a> on <a href="https://unsplash.com/photos/grayscale-photo-of-tree-roots-3Ck1ppnf-6c?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;/assets/avatar.jpg&quot;, &quot;bio&quot;=&gt;&quot;Hi! I&apos;m James, a dedicated polyglot mobile developer based in Portland, OR.&quot;, &quot;links&quot;=&gt;nil}</name></author><category term="android" /><summary type="html"><![CDATA[]]></summary></entry></feed>