I have a tendency to start a lot of projects. I don’t necessarily finish them all — some software is never truly finished, some projects become deprecated, and sometimes I just get bored or simply forget about them.

As a contractor (and potential employee), I find it useful to point people to my GitHub page, where my projects are prominently displayed and easily accessible. Visitors can get a quick sense of what I like to build and what my code and conduct look like.

Since I have many rotating projects, it’s becoming quite difficult to keep track of which projects were recently updated, added, or removed. Not to mention, 6 pins on the GitHub profile really isn’t that much.

In this article, I will try to automate this process just a bit. This blog seems like the perfect place to make things more organized while also keeping them easy to update. It gives me more control than GitHub, and I already have a projects page here, which makes a great target to replace with something a bit less static.

Hopefully, this article will give you some nice ideas on how you can spruce up your website with some semi-dynamic content, or how you can also help yourself display your featured work with a lot less manual labor.

The goal

This blog is an Astro project hosted as a static website. I truly recommend Astro; it’s an incredibly easy, fun, and flexible way to create a website or a blog that isn’t meant to be some kind of large web application — but I digress.

What I’m trying to accomplish here is essentially the following:

  • Get all my GitHub repos
  • Filter out uninteresting ones and prominently display the best ones
  • List others by type/category/language/some other grouping

Now, since this is a static website generator that I am using, there are a few limitations:

  • There is no “server side” at runtime — all processing happens at build time
  • I want the system I am creating to play nicely with the existing code
  • I want it to be easy to update, manage, add overrides to, and hide certain repositories

The plan

As of writing this, I haven’t yet changed a single character of code. First, I want to take stock of what I currently have and come up with a plan for how to get through this.

I already have a projects page; it’s simply an index with a list of some projects. Each project links to its own page, which may or may not have more information. This isn’t ideal, but I can already see room for improvement — GitHub holds a README, a project description, releases, and links. It would be nice to incorporate that information into the page.

In Astro, pages are created inside the src/pages/ directory of a project. Content for those pages can be written in simpler formats, such as Markdown and MDX, inside the src/content/ directory. With that in mind, let’s take a peek at the project-related files:

src/
├── components
│   ├── ProjectLinks.astro
├── content
│   ├── config.ts
│   └── project
│       ├── dart-script-runner.md
│       ├── dungeon-paper.md
│       ├── flutter-wheel-spinner.md
│       ├── mudblock.md
│       ├── simple-scaffold.md
│       └── unaconfig.md
└── pages
    └── projects
        ├── [...slug].astro
        └── index.astro

As you can see, there are a few files that are relevant to projects.

  • A component that displays a project
  • An index page
  • A page for each project ([...slug].astro)
  • Contents for each project (the .md files)

So, for each repository I need to:

  • Generate a .md file
  • Update the index page to display them with new information & sorting

This will get me most of the way there. After that, it will be mostly connecting some pipes to make sure it’s easy to reproduce this from, say, a CI environment, for easy deployment.

Getting the data

Well, I can’t display GitHub projects without getting their information, right?

For this, I’ll need a GitHub token that works. I’ve already got one set up — perfect.

Right now, the top of the src/pages/projects/[...slug].astro file looks like this:

import { getCollection, type CollectionEntry } from 'astro:content'

export async function getStaticPaths() {
  const posts = await getCollection('project')
  return posts.map((post) => ({
    params: { slug: post.slug },
    props: post,
  }))
}
type Props = CollectionEntry<'project'>

const post = Astro.props
const { description, links } = post.data
const { Content } = await post.render()

The function getStaticPaths is a special function in Astro that tells Astro which pages to generate; here it uses the content collection to do so. I am passing slug as a param to the page generator, which means it can be accessed through the file name’s placeholder — that’s why the file is called [...slug].astro. The [] brackets denote a param token to be replaced, and the ... (rest) prefix means the value may contain a directory path.
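To illustrate the mapping (the slugs and routeFor helper here are hypothetical — Astro does this expansion internally):

```typescript
// Hypothetical sketch of how Astro expands the [...slug] route into URLs.
function routeFor(slug: string): string {
  return `/projects/${slug}`
}

// A plain [slug] param matches a single path segment; the rest form
// ([...slug]) also matches slugs that contain slashes:
console.log(routeFor('simple-scaffold')) // /projects/simple-scaffold
console.log(routeFor('tools/unaconfig')) // /projects/tools/unaconfig
```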

The top of the src/pages/projects/index.astro file looks like this:

const projects = await getCollection('project')
projects.sort((a, b) => {
  return a.data.order - b.data.order
})

This is where we load all the projects for the index page. Note that the two don’t necessarily have to correlate; I can have pages without links routing to them, or projects without pages to link to. That’s not the case here, but it’s good to know.

Practically anything can be loaded here, which is where I intend to invoke the generator of new pages for the GitHub projects. So, let’s get that working.

Using GitHub API to grab repositories

A quick search and we can easily find the GitHub API specification for repositories for a user.

This is where I need the API token.

Let’s create a new TypeScript file and put whatever logic we need there in its own box.

I’ll start simple and work up from there. Let’s make a request to the repos API, and for each repo also grab the README file.

// src/github-projects.ts
import { GITHUB_TOKEN } from './consts'
import { GitHubProjectSchema } from './types'

const GITHUB_USERNAME = 'chenasraf'
const headers = new Headers()
headers.set('Authorization', `Bearer ${GITHUB_TOKEN}`)

export async function getProjectsList(): Promise<GitHubProjectSchema[]> {
  // Fetch the data
  const response = await fetch(`https://api.github.com/users/${GITHUB_USERNAME}/repos`, { headers })
  const repos = await response.json()

  const projects: GitHubProjectSchema[] = []

  // Create a new object for each repo + get README
  for (const repo of repos) {
    const project = GitHubProjectSchema.parse({
      name: repo.name,
      title: repo.name,
      url: repo.html_url,
      description: repo.description,
      stars: repo.stargazers_count,
      order: -repo.stargazers_count,
      links: [],
    })

    // Get the README.md static file
    const readmeResponse = await fetch(
      `https://raw.githubusercontent.com/${GITHUB_USERNAME}/${repo.name}/${repo.default_branch}/README.md`,
      { headers },
    )
    const readme = await readmeResponse.text()
    project.readme = readme
    projects.push(project)
  }

  return projects
}

I’ll also create a type for it:

// src/types.ts
import { z } from 'astro:content'

export const LinkSchema = z.object({
  title: z.string(),
  href: z.string(),
  icon: z.string(),
})
export type LinkSchema = z.infer<typeof LinkSchema>

export const GitHubProjectSchema = z.object({
  name: z.string(),
  title: z.string(),
  description: z.string().nullable(),
  url: z.string(),
  stars: z.number(),
  readme: z.string().optional(),
  order: z.number(),
  links: z.array(LinkSchema),
})
export type GitHubProjectSchema = z.infer<typeof GitHubProjectSchema>

Seems like quite a bit of code, but it’s really not complex at all. Here’s a rundown:

  1. Fetch the data from /users/${GITHUB_USERNAME}/repos, which gives us all the basic information about each repository
  2. Fetch the README from the raw file link for each repository
  3. Put it all together and push into an array

Now, I’ll use that instead of the original in index.astro, just to test what it shows and that everything works.

const projects = await getProjectsList()
projects.sort((a, b) => {
  return a.order - b.order
})

// in the template itself:
projects.map((project) => (
  <code>
    <pre>{JSON.stringify(project, null, 2)}</pre>
  </code>
))

I removed the previous ProjectCard component for now because we haven’t updated it yet, and I just wanna make sure things are working.

Let’s refresh the page, and… Well, this is taking a couple of seconds, but okay, nothing major at all. It loaded! Wait… this is only 30 projects. Pretty sure I have more than that, even just the public ones.

Hmm… Oh, here, I found the problem.

Handling pagination

Seems like I’ve hit a small snag, but it’s all well documented, so I can only blame my own laziness.

Here is an excerpt from the GitHub API documentation for /users/{username}/repos:

page (integer): The page number of the results to fetch. For more information, see “Using pagination in the REST API”.

Okay, let’s implement pagination to get everything:

async function fetchRepos() {
  const repos = []
  let page = 1

  let response = await fetchReposPage(page)
  repos.push(...response.repos)

  while (response.url) {
    response = await fetchReposPage(++page)
    repos.push(...response.repos)
  }

  return repos
}

async function fetchReposPage(page: number) {
  const response = await fetch(
    `https://api.github.com/users/${GITHUB_USERNAME}/repos?page=${page}&per_page=100`,
    { headers },
  )

  const repos = await response.json()
  const links = response.headers.get('link')

  let url: string | null = null

  if (links) {
    const next = links.split(',').find((link) => link.includes('rel="next"'))
    if (next) {
      // Take the part before ';', trim whitespace, then strip the angle brackets
      url = next.split(';')[0].trim().slice(1, -1)
    }
  }
  return { repos, url }
}

I took some shortcuts — I used an iterating number instead of using the actual page link. It should still work, and there is no unknown data in the actual link. I just need to make sure it exists before trying.
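For reference, a slightly more thorough way to pull URLs out of the Link header would be a small parser like this (a sketch, not what I shipped; parseLinkHeader is my naming):

```typescript
// Parses a Link header such as:
//   <https://api.github.com/user/repos?page=2>; rel="next", <...?page=5>; rel="last"
// into a map like { next: '...', last: '...' }
function parseLinkHeader(header: string): Record<string, string> {
  const rels: Record<string, string> = {}
  for (const entry of header.split(',')) {
    const match = entry.match(/<([^>]+)>;\s*rel="([^"]+)"/)
    if (match) {
      rels[match[2]] = match[1]
    }
  }
  return rels
}
```

With this, the loop could simply follow rels.next until it comes back undefined.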

Also, I could have totally made the fetching logic recursive, which might have looked cleaner than a while loop. But I don’t wanna stress over this too much. Overengineering is bleh.
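For the curious, a recursive variant could look something like this, sketched with the page fetcher passed in so it stays testable (fetchAllPages and PageResult are my names, not from the codebase):

```typescript
// The shape each page fetch returns: the repos plus a next-page URL (or null)
type PageResult = { repos: unknown[]; url: string | null }

// A recursive take on the pagination loop: fetch one page, then recurse
// while the fetcher reports a next-page URL
async function fetchAllPages(
  fetchPage: (page: number) => Promise<PageResult>,
  page = 1,
): Promise<unknown[]> {
  const { repos, url } = await fetchPage(page)
  if (!url) return repos
  return [...repos, ...(await fetchAllPages(fetchPage, page + 1))]
}
```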

Let’s try it now, and… it works! That didn’t take me multiple iterations to get it right at all 😇.

Well, I have all the data I could want for now. On to the next step.

Generating Markdown files

Astro consumes Markdown files, which allow me to also do some fancier rendering for each project rather than just showing text. You can add metadata (often called “frontmatter”) at the top of the file, like so:

---
title: Simple Scaffold
order: 200
links:
  - href: https://www.npmjs.com/package/simple-scaffold
    icon: logo-npm
    title: NPM
description: |-
  <p class="mb-0">A simple command to generate any file structure, from single components to entire app boilerplates.</p>
  <p>
    See
    <a href="/2019/03/simple-scaffold">my post</a>
    detailing more, or the
    <a href="https://chenasraf.github.io/simple-scaffold">documentation</a>
    /
    <a href="https://www.npmjs.com/package/simple-scaffold">NPM</a>
    page to start using it!
  </p>
---

<!-- Markdown content may go here -->

So, I want to generate a similar file, but populated with our new data from GitHub.

I also want to be able to add custom information or add overrides, which won’t be destroyed if I happen to want to re-fetch repositories. I’ll think about that later; for now, let’s get this new task done.

Creating the file

While this is Markdown at the bottom, the top looks a lot like a YAML file.

First things first: I’ll move the original files to a different directory, so I don’t destroy them while working. I will probably also want to keep some of their data as overrides or additions to what the repo info gives me.

The next step is to convert the structure I already have to YAML. I am trying to decide whether to use a package or just implement a simple string file writer for this. The latter option is cool, but I’m sure there will be many problems with correctly escaping quotes and other YAML tokens in the output, so I prefer just using a pre-made package for this. I quickly found yaml, and it seems like a perfect option for that.
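To illustrate the escaping problem I wanted to avoid (naiveYaml is a made-up example, not real code from this project):

```typescript
// A naive `key: value` emitter looks tempting for frontmatter...
function naiveYaml(obj: Record<string, string>): string {
  return Object.entries(obj)
    .map(([key, value]) => `${key}: ${value}`)
    .join('\n')
}

// ...but it breaks as soon as a value contains YAML tokens:
naiveYaml({ description: 'CLI tool: fast, simple, with "quotes"' })
// description: CLI tool: fast, simple, with "quotes"
// The unquoted second colon makes this line invalid YAML, which is
// exactly the kind of edge case a proper YAML package handles for me.
```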

This part was surprisingly simple — only a few lines:

import fs from 'node:fs/promises'
import path from 'node:path'
import yaml from 'yaml'

const projectsDir = path.join(process.cwd(), 'src', 'content', 'project')

async function generateProjectFile({ readme, ...project }: GitHubProjectSchema) {
  const file = path.join(projectsDir, `${project.name.toLowerCase()}.md`)
  const lines = ['---']
  lines.push(yaml.stringify(project).trim())
  lines.push('---')
  if (readme) {
    lines.push(readme)
  }
  return fs.writeFile(file, lines.join('\n'))
}

export async function generateProjectsList() {
  const projects = await getProjectsList()
  await Promise.all(projects.map(generateProjectFile))
}

That already generates all the files for me. Success! I can now revert some of the index code to load from the original collection:

await generateProjectsList()
const projects = await getCollection('project')
projects.sort((a, b) => {
  return a.data.order - b.data.order
})

And update the content collection config with the correct schema:

// src/content/config.ts
import { defineCollection } from 'astro:content'
import { GitHubProjectSchema } from '../types'

const project = defineCollection({
  schema: GitHubProjectSchema,
})

export const collections = { project }

And now my files look like this:

---
name: simple-scaffold
description: >-
  Generate any file structure &mdash; from single components to entire app boilerplates, with a
  single command.
url: https://github.com/chenasraf/simple-scaffold
stars: 53
order: -53
links: []
---

<p align="center">
  <img src="https://chenasraf.github.io//simple-scaffold/img/logo-lg.png" alt="Logo" />
</p>
<!-- Rest of contents of the README.md (too big to show here) -->

This already looks pretty solid. If I just update the ProjectCard props and use it again, I should already see a nice list of projects. Let’s do that:

---
// src/pages/projects/index.astro
---
<div class="grid grid-cols-1 md:grid-cols-2 gap-8">
  {projects.map((project) => <ProjectCard project={project} />)}
</div>
---
// src/components/ProjectCard.astro
import type { CollectionEntry } from 'astro:content'
import ProjectLinks from './ProjectLinks.astro'
type Props = { project: CollectionEntry<'project'> }
const { project } = Astro.props
const { title, description, links } = project.data
---

<style>
  .project-card p {
    @apply mt-0;
    @apply mb-0;
  }
</style>
<div
  class:list={[
    'project-card cursor-default transition duration-300',
    'border dark:border-none',
    'shadow-generic hover:shadow-generic-hover',
    'dark:shadow-generic-dark dark:hover:shadow-generic-dark-hover',
    'rounded-2xl p-6',
  ]}
>
  <h3 class="mt-0">
    <a href={`/projects/${project.slug}`}>{title}</a>
  </h3>
  <div set:html={description} />
  <ProjectLinks links={links} />
</div>

Success! I see my GitHub projects all nicely laid out for me on the projects page.

Let’s update the details page so that clicking it will lead to the actual information.

There are two main things to change. First, let’s update the props so they reflect the new md files:

// src/pages/projects/[...slug].astro
const post = Astro.props
const { name, description, links } = post.data
const { Content } = await post.render()

You will notice the const { Content } = await post.render() line there at the bottom.

When an md file is imported as part of a collection, its contents are already parsed as Markdown and returned as a component I can inject into the page.

The metadata at the top (the frontmatter) is added alongside the content as additional props for use.

Now we can simply plug it into the template:

---
// src/pages/projects/[...slug].astro
// ...
---
<Page showTitle prose title={name} description={description ?? ''}>
  <p set:html={description} />
  <div class="no-prose">
    <ProjectLinks links={links} />
  </div>
  <Prose><Content /></Prose>
</Page>

We now have a fully functioning projects list, along with a singular project page for each repo. Great!

Enriching the data

The proof-of-concept is done. Now we can enrich the data a bit more, and maybe update the design. I also cached the repository responses into JSON files, so I can iterate later without having to re-fetch everything.

Let’s add some more useful details to the project page. The easiest win is adding the GitHub link to the links array:

async function getProjectsList(): Promise<GitHubProjectSchema[]> {
  // ...

  for (const repo of repos) {
    const project = GitHubProjectSchema.parse({
      name: repo.name,
      title: repo.name,
      url: repo.html_url,
      description: repo.description,
      stars: repo.stargazers_count,
      order: -repo.stargazers_count,
      // There 👇🏼
      links: [{ href: repo.html_url, icon: 'logo-github', title: 'GitHub' }],
    })
  }

  // ...
}

Now, I’ll put my old files in a directory called project-overrides. We will use this to manually add override fields to any repo we choose.

We’ll start by loading the corresponding file:

if (await fileExists(overridesFile)) {
  const content = await fs.readFile(overridesFile, 'utf8')
  const allLines = content.split('\n')
  // The frontmatter sits between the opening '---' and the last '---' line
  const frontmatter = allLines.slice(1, allLines.lastIndexOf('---')).join('\n')
  const obj = yaml.parse(frontmatter) as Partial<GitHubProjectSchema>
}

Then loading the overrides information there:

for (const link of obj.links ?? []) {
  const found = project.links.findIndex((i) => i.href === link.href)
  if (found >= 0) {
    project.links.splice(found, 1, link)
  } else {
    project.links.push(link)
  }
}
for (const key of ['title', 'order', 'description'] as const) {
  if (obj[key as keyof typeof obj] != null) {
    project[key as keyof typeof project] = obj[key as keyof typeof obj] as never
  }
}

I wanted to avoid duplicated links, so I let the overrides take precedence if there are any conflicts.

Filtering out projects

I also want to filter out any irrelevant projects. I’ll do that by making a few checks before I push into the projects array.

Something like this:

const overridesDir = path.join(process.cwd(), 'src', 'content', 'project-overrides')
const projectIgnoreFile = path.join(overridesDir, '.projectignore')

// Load the ignore list up front, so it's ready before any filtering runs
const projectIgnore: string[] = (await fs.readFile(projectIgnoreFile, 'utf8'))
  .split('\n')
  .map((i) => i.trim())
  .filter(Boolean)

function projectFilter(project: Record<string, any>): boolean {
  if (projectIgnore.includes(project.name)) {
    return false
  }
  return [
    !project.fork,
    project.stargazers_count > 0,
    //
  ].every(Boolean)
}

As you can see, I exclude:

  1. Project names listed in the ignore file
  2. Forks
  3. Projects without stars

I can always tweak this further if I need to. The regular filters got me most of the way there, and I listed the remaining old (but starred) projects in the ignore file.

Now, I can add it to the main repo loop, simply using continue to skip the current project:

async function getProjectsList(): Promise<GitHubProjectSchema[]> {
  const repos = await fetchRepos()
  const projects: GitHubProjectSchema[] = []

  for (const repo of repos) {
    if (!projectFilter(repo)) {
      continue
    }
    // ...
    projects.push(project)
  }
  // ...
}

Featuring projects

Next, I want a way to highlight my best projects. Well, that’s easy — I can just add a new property to GitHubProjectSchema, called featured. It’s a simple boolean, and when I want to highlight a repo, I can add an override for that field; there shouldn’t be many of them anyway, and they will most likely already have overrides. Even if not, having a file with 3 lines isn’t that terrible.

Let’s update the types:

// src/types.ts
export const GitHubProjectSchema = z.object({
  name: z.string(),
  title: z.string(),
  description: z.string().nullable(),
  url: z.string(),
  stars: z.number(),
  readme: z.string().optional(),
  order: z.number(),
  links: z.array(LinkSchema),
  // here:
  featured: z.boolean().optional().default(false),
})

And add the implementation to the overrides loading:

// just add the 'featured' property to the list of overridable fields:
for (const key of ['title', 'order', 'description', 'featured'] as const) {
  // original logic
}

As for changing how featured projects are displayed, that’s up to you. Personally, I partitioned them into two arrays: the featured ones go at the top with a slightly different design, while the others are separated to the bottom and rendered a bit smaller.
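The partitioning itself is a tiny helper; a minimal sketch (partitionProjects is my naming, not from the codebase):

```typescript
// Split projects into featured and non-featured, preserving order
function partitionProjects<T extends { featured?: boolean }>(projects: T[]): [T[], T[]] {
  const featured: T[] = []
  const rest: T[] = []
  for (const project of projects) {
    if (project.featured) featured.push(project)
    else rest.push(project)
  }
  return [featured, rest]
}
```

In the index page, something like const [featured, rest] = partitionProjects(projects) then lets the template render the two lists with different card styles.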

Finishing touches

I have the basic structure working well. I could call it a day, but I will also update some styles in the new pages, re-order the information, stuff like that.

There are also some other ideas I can think of to improve this:

  • Adding release links
  • Adding the license
  • Smarter filtering
  • A search feature
  • Showing project tags
  • Populating the project URL from the repo info

But all of these can wait for another day.

Wrapping it all up

The last thing I need is an easy way to refresh the project files whenever an update is due.

  • I have a CI that deploys the site on every push to master
  • I would like a way to trigger the changes without necessarily making an actual change to a real file

I have a few ideas in mind. To manually trigger a rebuild, I could do one of the following:

  • Use manually triggered GitHub Actions
  • Commit some sort of hash file, which will change when needed, causing the build to trigger
  • Add the files to gitignore, which will make sure every GitHub Action run goes and fetches the most up-to-date information

I think adding the files to gitignore and manually triggering GitHub Actions would solve both of my issues.

Updating the workflow is easy; all I have to do is add the workflow_dispatch event to be able to manually trigger my deploy action:

# .github/workflows/deploy.yaml

name: Deploy
on:
  push:
    branches:
      - master
  workflow_dispatch:

And to add the files to gitignore:

# .gitignore
src/content/project

Uh-oh

Seems there is a major snag.

In Astro, this all works pretty well in dev mode, but when using the astro build command, the newly created files aren’t picked up by the build system. (In hindsight, the extra refresh I needed after generating the files in dev mode was a hint.)

Apparently, the file list would need to be refreshed mid-build, and I don’t think that’s quite possible.

So… what now?

I decided simpler is better. Let’s run this generation step as a prebuild command, separate from Astro, so the files already exist by the time the build starts.

There is one caveat: since this script runs outside Astro, it can’t use any of Astro’s imports. So, if I want the types in the script, I will have to replicate them (and worry about inconsistencies in the future) using my own zod import, not the one that is bundled with Astro:

// src/github-projects.ts
import { z } from 'zod'

const LinkSchema = z.object({
  title: z.string(),
  href: z.string(),
  icon: z.string(),
})
type LinkSchema = z.infer<typeof LinkSchema>

const GitHubProjectSchema = z.object({
  name: z.string(),
  title: z.string(),
  description: z.string().nullable(),
  url: z.string(),
  stars: z.number(),
  readme: z.string().optional(),
  order: z.number(),
  links: z.array(LinkSchema),
  featured: z.boolean().optional().default(false),
})
type GitHubProjectSchema = z.infer<typeof GitHubProjectSchema>

Also, I needed to change the way I access env vars:

// from:
const GITHUB_TOKEN = import.meta.env.GH_TOKEN || import.meta.env.GITHUB_TOKEN!
// to:
const GITHUB_TOKEN = process.env.GH_TOKEN || process.env.GITHUB_TOKEN!

And then I created a standalone script to run this:

// src/scripts/generate-projects.ts
import { generateProjectsList } from '../github-projects'

await generateProjectsList()

And made it run on pre-build in package.json:

{
  "scripts": {
    "prebuild": "tsx src/scripts/generate-projects.ts"
  }
}

I also removed the generate call from the astro pages.

And now it works!

Conclusion

That’s it! You can take a look at my Projects page to see how it turned out.

While it’s not perfect, I think it’s an interesting direction to take, and there are many possibilities for improving how and what to display. Have you done anything similar for yourself? How do you make sure to keep your projects up to date? Leave a comment below telling us more.

About the author

My name is Chen Asraf. I’m a programmer at heart — it’s both my job that I love and my favorite hobby. Professionally, I make fully fledged, production-ready web, desktop, and mobile apps for start-ups and businesses, and I consult, advise, and help train teams.

I’m passionate about tech, problem solving, and building things that people love.