Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

How to limit the number of Twitter posts in your timeline using JavaScript BLOG

less than 1 minute read

I have a bit of a problem with scrolling through too much of the awesome content on Twitter. There are just too many other people creating cool ideas and software! At first I justified it because I find academic papers there (my feed is curated to mostly academic CS/ML/SE), but now I admit it’s too much! In a bid to waste more time in order to waste less time, I created a simple script that adds an extra UI element to each post in my Twitter feed:

  • For the first five posts, it numbers them to remind me how many I’ve browsed.
  • After that, it blocks them out with a reminder of the limit I set (currently I keep it at five posts).

How to fix docker-compose: command not found error with newer versions of Docker BLOG

less than 1 minute read

The docker-compose command is missing from recent versions of Docker; it has been replaced by a plugin built into Docker: docker compose. To restore compatibility with scripts that use docker-compose, we can create a wrapper script that forwards its arguments to docker compose. Here’s the script:

# Switch to root
sudo su -
# Write the wrapper script to a file; quote the EOF delimiter so "$@" is written
# literally instead of being expanded (to nothing) by the current shell
cat << 'EOF' > /usr/local/bin/docker-compose
#!/bin/bash
docker compose "$@"
EOF
# Make the script executable so that we can invoke it directly from the shell
chmod +x /usr/local/bin/docker-compose

Siamese network and triplet loss

less than 1 minute read

A Siamese network is an architecture that runs two networks with shared weights (effectively the same network run twice) on two different inputs simultaneously. It is commonly trained with a contrastive loss such as triplet loss, which draws together the representations of similar inputs and pushes apart the representations of contrasting inputs.
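
As a rough illustration (not code from the post), here is a minimal PyTorch-style sketch of a triplet loss computed over the shared encoder’s outputs; the encoder, margin, and variable names are assumptions for the example:

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Distance from the anchor to the positive and negative embeddings
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # Push d_pos below d_neg by at least the margin; zero loss once satisfied
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# The same encoder (shared weights) embeds all three inputs:
# loss = triplet_loss(encoder(x_anchor), encoder(x_positive), encoder(x_negative))

PyTorch also provides torch.nn.TripletMarginLoss, which implements the same idea.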

Get filepath of Bash activation script

less than 1 minute read

Use ${BASH_SOURCE[0]} to reference the filepath of a Bash script. Unlike $0, this works whether the script is executed (bash script.sh) or sourced (source script.sh).

Beware any vs len

less than 1 minute read

I fell into the habit of using any() to check whether a list is empty. It’s nice because it works for any iterable, including generators, even when len() is not defined. However, it has a pitfall: if the list is non-empty but contains only falsy values, any() returns False. For this reason, I advise using len() to check whether a list is empty.
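
A quick illustration of the pitfall (the values are made up for the example):

items = [0, "", None]    # non-empty list, but every element is falsy
print(any(items))        # False -- wrongly suggests the list is empty
print(len(items) > 0)    # True  -- correctly reports that it is non-empty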

Use head -n -0 to get all items in list

less than 1 minute read

You may know that you can use head -n $n to get the first $n lines of a list. But you may not know that you can supply -0 as the count to get all items in the list (with GNU head, -n -K prints all but the last K lines, so -n -0 prints everything).

Use hard links to replicate log files in a generated directory

1 minute read

Recently I wanted to write program logs to a file, then copy it to the directory where I store my checkpoints. Because I wanted to log things before creating the checkpoint directory, I attached my logger to a file in my root directory.
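
The post’s actual code is not shown in this excerpt; as a hedged sketch of the general idea (paths and names are made up), a hard link lets the same log file also appear in the checkpoint directory once that directory exists:

import os

LOG_PATH = "train.log"  # the logger writes here from the start of the run

def link_log_into(checkpoint_dir):
    # Create the checkpoint directory on demand, then hard-link the log into it.
    # Both paths refer to the same inode, so later writes appear in both places
    # (this requires both paths to live on the same filesystem).
    os.makedirs(checkpoint_dir, exist_ok=True)
    target = os.path.join(checkpoint_dir, os.path.basename(LOG_PATH))
    if not os.path.exists(target):
        os.link(LOG_PATH, target)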

Beware metric auto-reduce with PyTorch Lightning + TorchMetrics

1 minute read

PyTorch Lightning + TorchMetrics can log metrics per step and per epoch. TorchMetrics also provides MetricCollection, which computes several metrics at once and gets rid of redundant code. Here is how I have it set up:
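
The excerpt cuts off before the setup; below is a rough, illustrative sketch of the general pattern (not the post’s actual code), with the model, metrics, and task chosen arbitrarily for the example:

import torch
import torchmetrics
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(16, 1)
        metrics = torchmetrics.MetricCollection({
            "acc": torchmetrics.Accuracy(task="binary"),
            "f1": torchmetrics.F1Score(task="binary"),
        })
        # Separate clones per stage so train/val metric state is not mixed
        self.train_metrics = metrics.clone(prefix="train_")
        self.val_metrics = metrics.clone(prefix="val_")

    def forward(self, x):
        return self.net(x).squeeze(-1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y.float())
        self.train_metrics.update(logits, y)
        # Logging the collection lets Lightning reduce it per step and per epoch
        self.log_dict(self.train_metrics, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())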

Print pandas Series as percent

less than 1 minute read

Use this snippet to print a pd.Series of floats as percentages.

.apply(lambda x: x * 100).apply("{:,.2f}%".format)
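
For a self-contained example (the Series values are made up):

import pandas as pd

s = pd.Series([0.1234, 0.5, 0.987])
print(s.apply(lambda x: x * 100).apply("{:,.2f}%".format))
# 0    12.34%
# 1    50.00%
# 2    98.70%
# dtype: object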

Encoder-decoder models

less than 1 minute read

Encoder-decoder is a deep neural network architecture that consists of two components:

  • Encoder: input -> encoder memory (a real-valued vector).
  • Decoder: encoder memory -> output.

The input is variable length; the encoder memory is fixed length. A minimal sketch follows below.
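
As a hedged illustration (not from the post), here is a minimal PyTorch sketch of this shape; the sizes and the GRU/teacher-forcing setup are assumptions for the example:

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, vocab_size=1000, hidden_size=128):  # hypothetical sizes
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        # Encoder: variable-length input -> fixed-size memory (final hidden state)
        _, memory = self.encoder(self.embed(src_tokens))
        # Decoder: memory -> output sequence, conditioned on the target prefix
        dec_out, _ = self.decoder(self.embed(tgt_tokens), memory)
        return self.out(dec_out)  # logits over the vocabulary at each step

model = EncoderDecoder()
src = torch.randint(0, 1000, (2, 7))  # batch of 2 inputs of length 7
tgt = torch.randint(0, 1000, (2, 5))  # batch of 2 targets of length 5
print(model(src, tgt).shape)          # torch.Size([2, 5, 1000])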

Print item keys with jq

less than 1 minute read

I can use jq to print the keys in an array of JSON objects quickly.

# Print the keys of the first object in the array (assuming all objects share the same keys)
jq '.[0] | keys[]' $filename

Neural Network Embeddings

less than 1 minute read

Embeddings map high-dimensional data (for example, discrete token or category IDs) to a low-dimensional floating-point representation. This can improve performance because the model receives a denser, learnable representation of its input.
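
A minimal PyTorch sketch of the idea (the vocabulary size, dimension, and IDs are arbitrary):

import torch
import torch.nn as nn

# Map a vocabulary of 10,000 discrete IDs to 64-dimensional float vectors
embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=64)

token_ids = torch.tensor([3, 17, 4096])  # hypothetical input IDs
vectors = embedding(token_ids)           # shape (3, 64); learned during training
print(vectors.shape)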

Meditations on Meditations: I-IV BLOG

10 minute read

I’m reading Marcus Aurelius’s Meditations in Marcus Aurelius and His Times (Transition from Paganism to Christianity) [0].

No pain no gain? Comparing 3 program analysis frameworks for C BLOG

11 minute read

Program analysis methods often represent programs as graphs, and these graphs should be generated automatically from the source code. Many tools have been implemented to do this, but they are often painful to set up. In this post, I compare three program analysis frameworks that I have used to generate graph representations of C programs.

Publications

Validating static warnings via testing code fragments PAPER

Published in ISSTA, 2021

In this paper, we present a novel solution that automatically generates test cases based on static warnings to validate true and false positives.

Recommended citation: Ashwin Kallingal Joshy, Xueyuan Chen, Benjamin Steenhoek, and Wei Le. 2021. Validating static warnings via testing code fragments. In Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2021). Association for Computing Machinery, New York, NY, USA, 540–552. https://doi.org/10.1145/3460319.3464832

Refactoring programs to improve the performance of deep learning for vulnerability detection THESIS

Published in Iowa State University, ProQuest Dissertations & Theses Global, 2021

This thesis is about refactoring programs as a method of data augmentation.

Recommended citation: Steenhoek, Benjamin. (2021). Refactoring programs to improve the performance of deep learning for vulnerability detection (Order No. 28648161). Available from Dissertations & Theses @ Iowa State University; ProQuest Dissertations & Theses Global. (2625295478). https://www.proquest.com/dissertations-theses/refactoring-programs-improve-performance-deep/docview/2625295478/se-2?accountid=10906

Refactoring programs to improve the performance of deep learning for vulnerability detection POSTER

Published in Iowa State University 6th Annual Research Day, 2022

This poster is about refactoring programs as a method of data augmentation.

Recommended citation: Steenhoek, Benjamin. (2022). Refactoring programs to improve the performance of deep learning for vulnerability detection (Poster). Presented at: Iowa State University 6th Annual Research Day. https://benjijang.com/files/2022-04-01-poster.pdf

An Empirical Study of Open-Source Development Practices for Safety Certified Software COURSE PROJECT

Published in Iowa State University, COM S 515 Final Project, 2022

This paper extends a dataset of open-source safety-critical software with details about the project’s development practices and safety goals.

Recommended citation: Steenhoek, Benjamin. (2022). An Empirical Study of Open-Source Development Practices for Safety Certified Software (Final Project). Iowa State University COM S 515. https://benjijang.com/files/2022-04-26-coms515-opensource.pdf

An Empirical Study of Deep Learning Models for Vulnerability Detection PAPER

Published in ICSE, 2023

In this paper, we surveyed and reproduced 9 state-of-the-art (SOTA) deep learning models on 2 widely used vulnerability detection datasets: Devign and MSR.

Recommended citation: Benjamin Steenhoek, Md Mahbubur Rahman, Richard Jiles, and Wei Le. 2023. An Empirical Study of Deep Learning Models for Vulnerability Detection. In Proceedings of the 45th International Conference on Software Engineering (ICSE 2023). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.48550/arXiv.2212.08109

Do Language Models Learn Semantics of Code? A Case Study in Vulnerability Detection PAPER

Published in ArXiv, 2023

In this paper, we analyze language models to investigate whether the models have learned the semantics of code relevant to vulnerability detection, namely bug semantics, and if so, how the alignment to bug semantics relates to model performance.

Recommended citation: Benjamin Steenhoek, Md Mahbubur Rahman, Shaila Sharmin, & Wei Le. (2023). Do Language Models Learn Semantics of Code? A Case Study in Vulnerability Detection. ArXiv. https://arxiv.org/abs/2311.04109

A Comprehensive Study of the Capabilities of Large Language Models for Vulnerability Detection PAPER

Published in ArXiv, 2024

We evaluated the vulnerability detection capabilities of SOTA LLMs. We systematically searched for the best-performing prompts and analyzed their reasoning.

Recommended citation: Benjamin Steenhoek, Md Mahbubur Rahman, Monoshi Kumar Roy, Mirza Sanjida Alam, Earl T Barr, and Wei Le. 2024. A Comprehensive Study of the Capabilities of Large Language Models for Vulnerability Detection. ArXiv. https://arxiv.org/pdf/2403.17218

TRACED: Execution-aware Pre-training for Source Code PAPER

Published in ICSE, 2024

We introduce TRACED, an execution-aware pre-training strategy for source code wherein we pre-train code language models with a combination of source code, executable inputs, and corresponding execution traces.

Recommended citation: Yangruibo Ding, Benjamin Steenhoek, Kexin Pei, Gail Kaiser, Wei Le, and Baishakhi Ray. 2024. TRACED: Execution-aware Pre-training for Source Code. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24), April 14–20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 12 pages. https://doi.org/10.48550/arXiv.2306.07487

Dataflow Analysis-Inspired Deep Learning for Efficient Vulnerability Detection PAPER

Published in ICSE, 2024

We present DeepDFA, a dataflow analysis-inspired graph learning framework and an embedding technique that enables graph learning to simulate dataflow computation.

Recommended citation: Benjamin Steenhoek, Hongyang Gao, and Wei Le. 2024. Dataflow Analysis-Inspired Deep Learning for Efficient Vulnerability Detection. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24), April 14–20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 13 pages. https://doi.org/10.48550/arXiv.2212.08108