I love to share what I've learnt with others. Check out my blog posts and notes about my
academic research, as well as technical solutions to software engineering and data
science challenges. Opinions expressed in this blog are solely my own.
I am glad to share that we have recently submitted two manuscripts to academic journals for peer review. Please check the Publications page for details. 😁
As I am participating in the MDTF project, I need some sample climate data to test my POD. NOAA has a Google Cloud repository that stores CMIP6 data. To download the data, I need gsutil installed on my Linux machine.
I created the minimally necessary conda environment for this.
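Below is a sketch of the setup and a sample download, assuming gsutil is available from the conda-forge channel and that the public CMIP6 bucket lives at gs://cmip6/ (the dataset path in the copy command is only a placeholder):

# create a minimal environment that contains only gsutil
conda create -n gsutil_env -c conda-forge gsutil
conda activate gsutil_env

# browse the public CMIP6 bucket to locate the data of interest
gsutil ls gs://cmip6/

# copy a (placeholder) subdirectory of sample data to a local folder
gsutil -m cp -r gs://cmip6/CMIP6/<institution>/<model>/<experiment>/ ./sample_data/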
This year's Nobel Prize in Physics was awarded to the inventors of the Artificial Neural Network (ANN). As someone who has worked on both physics and machine learning, I wonder what this implies - the power of ANNs lies in making predictions without understanding the underlying mechanisms, while physics is precisely about making predictions by finding out the underlying mechanisms. Is that a shift in regime? 🤔
Regardless, the rise of ANNs has created tonnes of job opportunities for physicists, which is an invaluable contribution to the physics community (as faculty/staff scientist positions are very limited). As an international PhD grad in the US, I'm glad that USCIS can no longer complain about Physics degrees being irrelevant to machine learning for H-1B visa applications! 🥳🎉
The Python standard-library module traceback can provide more details about an (unexpected) error than catching it with except Exception as ex: and then examining ex.
Let's make a function that raises an error for demonstration:
import traceback

def do_something_wrong():
    cc = int("come on!")
    return cc

try:  # first catch
    do_something_wrong()
except Exception as ex:
    print(f"The Exception is here:\n{ex}")

try:  # second catch
    do_something_wrong()
except:
    print(f"Use traceback.format_exc() instead:\n{traceback.format_exc()}")
The first catch would only display
The Exception is here:
invalid literal for int() with base 10: 'come on!'
while the second catch includes not only the error but also where it occurs.
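The output of the second catch looks roughly like this (the file name and line numbers depend on where the snippet is saved):

Use traceback.format_exc() instead:
Traceback (most recent call last):
  File "demo.py", line 13, in <module>
    do_something_wrong()
  File "demo.py", line 4, in do_something_wrong
    cc = int("come on!")
ValueError: invalid literal for int() with base 10: 'come on!'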
I wanted to create a GitHub workflow that queries a web API and returns the results as text files in the repo. Here are several problems I solved during the development.
Passing keys and tokens via secrets to the web API
Several tokens and secrets are necessary to query the web API. I stored them as GitHub secrets and access them in the workflow file via:
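A sketch of such a step, assuming the query is done by run_script.py and using placeholder secret names (API_KEY and API_TOKEN):

- name: query the web API
  run: python run_script.py
  env:
    API_KEY: ${{ secrets.API_KEY }}
    API_TOKEN: ${{ secrets.API_TOKEN }}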
After running run_script.py, several .txt files are produced in the directory data_dir/ inside the repository, which I want to push to the GitHub repository. I tried committing and pushing the files with actions/checkout@v4, but it does not work:
...
# Below is a version that does not work
- name: add files to git
  uses: actions/checkout@v4
  with:
    token: ${{ secrets.REPO_TOKEN }}
- name: do the actual push
  run: |
    git add data_dir/*.txt
    git commit -m "add files"
    git push
Running this, I receive an error: nothing to commit, working tree clean. Error: Process completed with exit code 1.
The version that works eventually looks like this:
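In outline, the trick is to check out the repository with the token first and then commit in a plain run step. The git user name and email below are placeholders, and git add -A is only one way of staging everything produced:

- uses: actions/checkout@v4
  with:
    token: ${{ secrets.REPO_TOKEN }}
- name: run the query script
  run: python run_script.py
- name: commit and push the produced files
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add -A
    git commit -m "add files"
    git push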
Note that it would commit all files produced to the repository, including some unwanted cached files. Therefore, I included a step before this to clean up the files:
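A sketch of such a cleanup step (the patterns removed here, e.g. Python cache folders, are just examples of unwanted files):

- name: clean up unwanted files
  run: |
    rm -rf __pycache__/
    find . -name "*.pyc" -delete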
A new release (v2.0.0) of the Python package falwa has been published to cope with the deprecation of numpy.distutils in Python 3.12. It involves some changes in the installation procedure, which you can find in the README section “Package Installation”.
Great thanks to Christopher Polster for figuring out a timely and clean solution for migration to python 3.12. 👏 For details and references related to this migration, users can refer to Christopher’s Pull request.
To train deep learning models written in PyTorch on big data in a distributed manner, we use BigDL-Orca at work. 🛠️
Compared to the Keras interface of BigDL, PyTorch (Orca) supports customization of various components for deep learning. For example, using the bigdl-dllib Keras API, you are constrained to the operations available in its Autograd module when customizing loss functions, whereas in PyTorch (Orca) you can do whatever you like by creating a customized subclass of torch.nn.modules.loss._Loss. 😁
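As a minimal sketch of that flexibility (the weighting scheme here is purely illustrative and not from any of our models):

import torch
from torch.nn.modules.loss import _Loss

class WeightedMSELoss(_Loss):
    # a mean-squared error that up-weights samples with large targets
    def __init__(self, scale: float = 2.0):
        super().__init__()
        self.scale = scale

    def forward(self, prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # arbitrary tensor operations are allowed here, unlike the Autograd-based Keras API
        weights = 1.0 + self.scale * target.abs()
        return (weights * (prediction - target) ** 2).mean()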
One drawback of Orca, though, is the mysterious error logging: what happened within the Java class (i.e., what caused the error) is not logged at all. I got stuck on an error during model training, but all I got from the Spark log was socket timeout. There can be many possible causes, but the one I encountered was related to the size of train_data.
Great thanks to my colleague Kevin Mueller, who figured out the cause 🙏 - when the partitions contain different numbers of batches in Orca, some barriers can never be reached, and that results in such an error.
To get around this, I dropped some rows to make sure the total size of train_data is a multiple of batch size:
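A sketch of that trimming step, assuming train_data supports row slicing (e.g. a NumPy array or pandas DataFrame) and that batch_size is defined elsewhere:

# keep only a whole number of batches so no partition ends up with a partial batch
num_full_batches = len(train_data) // batch_size
train_data = train_data[: num_full_batches * batch_size]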
I wrote a blog post in 2021 about how to integrate a pytest coverage check into a GitHub workflow.
To run coverage locally, executing coverage run --source=falwa -m pytest tests/ && coverage report -m yields the report (this one is from the PR for falwa release 1.3):
Our team lead shared with us some useful learning materials on advanced CS topics not covered in class: The Missing Semester of Your CS Education from MIT. I’ll spend some time reading this.
Below is the email I sent to the users of GitHub repo hn2016_falwa:
I am writing to inform you of two recent releases of the GitHub repo: v1.0.0 (a major release) and v1.1.0 (a bug fix). You can refer to the release notes for the details. There are two important changes in v1.0.0:
The Python package is renamed from hn2016_falwa to falwa, since this package implements finite-amplitude wave activity and flux calculations beyond those published in Huang and Nakamura (2016). The GitHub repo URL remains the same: https://github.com/csyhuang/hn2016_falwa/ . The package can be installed via pip as well: https://pypi.org/project/falwa/
It happens that the bug-fix release v0.7.2 has a bug in the code such that it over-corrects the nonlinear zonal advective flux term. v1.0.0 fixes this bug. Thanks to Christopher Polster for spotting the error. The fix requires re-compilation of the Fortran modules.
The rest of the details can be found in the release notes:
[Updated on 2023/12/11] After some research, it seems that scikit-build would be a continuously maintained solution: https://scikit-build.readthedocs.io/
We published an important bugfix release, hn2016_falwa v0.7.2, which requires recompilation of the Fortran modules.
Two weeks ago, we discovered a mistake in the derivation of the expression for the nonlinear zonal advective flux term, which leads to an underestimation of that flux component.
We will submit corrigenda for Neal et al. (2022, GRL) and Nakamura and Huang (2018, Science) to update the numerical results. The corrected derivation of the flux expression can be found in the corrected supplementary materials of NH18 (to be submitted soon). There is no change in the conclusions of any of the articles.
Please refer to Issue #83 for the numerical details and preliminary updated figures in NHN22 and NH18:
Thank you for your attention and let us know if we can help.