Posts by Collection

portfolio

Measuring Urban Surface Reflectivity and Heat Mitigation Potential at High-Resolution with Remote Sensing and Machine Learning

At WRI, I worked on a project to identify the surface reflectivity of roofs and pavements in urban areas using machine learning and remote sensing. We built on the methods developed in Ban-Weiss et al. 2015a & 2015b and scaled them through cloud computing and machine learning. We used official footprint data from the City of Los Angeles, Microsoft building footprints, and the OpenStreetMap/SharedStreets API to get geometries of roofs and streets. Using open-source satellite imagery from the National Agriculture Imagery Program (NAIP), ground truth measurements collected through project partners, and regression machine learning, we created a high-resolution map of surface reflectivity for multiple urban areas in the United States. The resulting data and maps provide an estimate of existing surface reflectivity at the building and street-segment scale, which can be superimposed with current heat vulnerability, green infrastructure, urban morphology, and urban heat data. This tool serves cities in developing and evaluating urban heat island reduction strategies and promoting wider adoption of urban heat mitigation programs.

Prediction of mean albedo for every roof and street in the City of Los Angeles between 2009 and 2018. Credit: WRI/Microsoft AI for Earth/Global Cool Cities Alliance/City of Los Angeles/George Ban-Weiss/Sika AG/Federal Highway Administration Albedo Study/James E Alleman/Michael Heitzman.
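The core of the workflow is a per-surface regression: for each roof or street geometry, summarize the NAIP band reflectances of the pixels it covers, then regress those features against ground-truth albedo measurements. Here is a minimal sketch of that idea with scikit-learn; the synthetic data, feature layout, and random forest model are illustrative assumptions, not the exact production pipeline.

```python
# Hedged sketch: predict surface albedo from per-polygon NAIP band statistics.
# The synthetic data and RandomForestRegressor are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per roof/street polygon: mean R, G, B, NIR reflectance over its pixels.
X = rng.uniform(0.0, 1.0, size=(500, 4))
# Stand-in for field-measured albedo (brighter surfaces -> higher albedo).
y = 0.6 * X[:, :3].mean(axis=1) + 0.1 * X[:, 3] + rng.normal(0, 0.02, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out polygons: {model.score(X_test, y_test):.2f}")
```

In the real workflow the feature rows would come from zonal statistics over the footprint and street geometries rather than synthetic draws.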

Object Detection Using the YOLO Model

YOLO (You Only Look Once) is a state-of-the-art object detection model that is fast and highly accurate. The network was trained on 608x608 images and requires only one forward pass to make predictions. In this project, I used pre-trained weights from the YOLOv2 model to detect cars in a dataset. The YOLO architecture runs each input image through a deep convolutional neural network, which returns all the predicted boxes for that image. The boxes were then filtered by thresholding on object and class confidence to remove low-probability boxes, and a second filter using Intersection over Union (IoU) thresholding removed overlapping boxes. The final output was one bounding box per object with a predicted score and class.

I used the model to detect cars in my driveway! Credit: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - You Only Look Once: Unified, Real-Time Object Detection (2015), Allan Zelener
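The two-stage filtering is the part worth seeing in code. Below is a minimal NumPy sketch of score thresholding followed by IoU-based non-max suppression; the function names and threshold values are my own illustrative choices, not the exact code from the project.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def filter_boxes(boxes, scores, score_thresh=0.6, iou_thresh=0.5):
    """Confidence thresholding followed by non-max suppression."""
    keep = scores >= score_thresh               # first filter: confidence
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(scores)[::-1]            # highest score first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        # second filter: drop boxes that overlap the kept box too much
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return boxes[kept], scores[kept]
```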

A Deep Convolutional Neural Network (CNN) to Classify Photos of Dogs and Cats

Using a dataset from a Kaggle machine learning competition, I wrote an algorithm to classify whether images contain a dog or a cat. The competition provided 25,000 labeled photos: 12,500 dog photos and an equal number of cat photos. The dataset was originally developed by Microsoft. I used the Keras API and took two approaches: (1) training a model from scratch, and (2) using transfer learning on a pretrained model. For the first approach, I followed the general architectural principles of the VGG models: I stacked three convolutional layers with 3×3 filters followed by a max pooling layer to create each block, and repeated the process to create a three-block network. Other important settings included ‘same’ padding, a 0.001 learning rate, and binary cross-entropy as the loss function. For the second approach, I loaded a VGG-16 model, removed the fully connected layers, froze the weights of all the convolutional layers, and trained new fully connected layers on the Kaggle dataset. The second approach produced higher accuracy in less time.

Training history for the two approaches.

I classified photos of some of my co-workers’ pets using the second model! Credit: Kaggle, Machine Learning Mastery, Microsoft
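For reference, here is a minimal Keras sketch of the second approach. The input size, head width, and optimizer settings are illustrative assumptions; only the overall pattern (load VGG-16 without its top, freeze the convolutional base, train a new classifier head) follows the description above.

```python
# Hedged sketch of the transfer-learning approach; sizes are assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

base = VGG16(include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                   # freeze the convolutional backbone

x = Flatten()(base.output)
x = Dense(128, activation='relu')(x)          # new fully connected layer
out = Dense(1, activation='sigmoid')(x)       # binary output: dog vs. cat

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9),
              loss='binary_crossentropy', metrics=['accuracy'])
```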

Residual Network (ResNet) for Identifying the Digit Represented by a Hand Sign

One of the challenges of training a very deep neural network is vanishing and exploding gradients. As the network gets deeper, it becomes very difficult for it to choose parameters that learn even the identity function, so performance degrades with depth. This problem can be significantly reduced by using Residual Networks (ResNets). ResNets use skip connections that take the activation from one layer and feed it to another layer much deeper in the network. This mitigates the vanishing gradient problem and lets the deeper layers easily learn the identity function, ensuring that performance does not degrade as layers are added.
This project classifies the digit represented by a hand sign. The dataset consists of 1080 hand images for training and 120 images for validation.

Visual representation of the different classes. Credit: DeepLearning.AI
I created the necessary building blocks to train a ResNet-50 model from scratch. Here is a summary of the model architecture:

Credit: DeepLearning.AI, Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - Deep Residual Learning for Image Recognition (2015)
The convolutional block and the identity block each have three convolution layers, with the convolutional block adding one more convolution layer in its shortcut path to make sure the dimensions match up for later stages. I trained the model for eight epochs, and it achieved around 94% validation accuracy! Here is the training history:


I used the model to classify an image of a hand sign!
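The identity block is the signature piece of the architecture. Here is a minimal Keras sketch of one, assuming the 1x1, 3x3, 1x1 convolution pattern from the original ResNet paper; the filter counts and function name are illustrative.

```python
# Hedged sketch of a ResNet identity block; filter sizes are assumptions.
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add

def identity_block(x, filters):
    f1, f2, f3 = filters
    shortcut = x                          # save the input for the skip connection
    x = Conv2D(f1, 1, padding='valid')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(f2, 3, padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(f3, 1, padding='valid')(x)
    x = BatchNormalization()(x)
    x = Add()([x, shortcut])              # skip connection: add the input back in
    return Activation('relu')(x)
```

The convolutional block follows the same pattern but passes the shortcut through an extra convolution so its dimensions match the main path.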

publications

Identifying and forecasting potential biophysical risk areas within a tropical mangrove ecosystem using multi-sensor data

Published in International Journal of Applied Earth Observation and Geoinformation, 2018

Mangroves are among the most productive ecosystems, known for provisioning various ecosystem goods and services. They sequester large amounts of carbon, protect coastlines against erosion, and reduce the impacts of natural disasters such as hurricanes. Bhitarkanika Wildlife Sanctuary in Odisha harbors the second largest mangrove ecosystem in India. This study used Terra, Landsat, and Sentinel-1 satellite data for spatio-temporal monitoring of the mangrove forest within Bhitarkanika Wildlife Sanctuary between 2000 and 2016. Three biophysical parameters were used to assess mangrove ecosystem health: leaf chlorophyll (CHL), Leaf Area Index (LAI), and Gross Primary Productivity (GPP). A long-term analysis of meteorological data such as precipitation and temperature was performed to determine the association between these variables and mangrove biophysical characteristics. The correlation between meteorological parameters and mangrove biophysical characteristics enabled forecasting of mangrove health and productivity for the year 2050 by incorporating IPCC-projected climate data. A historical analysis of land cover maps was also performed using Landsat 5 and 8 data to determine changes in mangrove area estimates for the years 1995, 2004, and 2017. There was a decrease in dense mangrove extent along with an increase in open mangroves and agricultural area. Despite conservation efforts, the current extent of dense mangrove is projected to decrease by up to 10% by the year 2050. All three biophysical characteristics (GPP, LAI, and CHL) are projected to experience net decreases of 7.7%, 20.83%, and 25.96%, respectively, by 2050 compared to the mean annual values in 2016. This study will help the Forest Department, Government of Odisha, in managing the remaining mangrove forest and making appropriate conservation decisions under changing climate and development pressures.

Recommended citation: Shrestha, S., I. Miranda, A. Kumar, M. Pardo, S. Dahal, T. Rashid, C. Remillard, and D.R. Mishra. (2018). Identifying and forecasting potential biophysical risk areas within a tropical mangrove ecosystem using multi-sensor data, International Journal of Applied Earth Observation and Geoinformation. https://www.sciencedirect.com/science/article/abs/pii/S0303243418302940
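The forecasting step described in the abstract boils down to regressing each biophysical parameter on meteorological drivers, then re-evaluating the fit with projected climate inputs. A minimal sketch of that idea, with entirely synthetic data and illustrative variable names (not the study's actual model):

```python
# Hedged sketch: regress a biophysical parameter (e.g., GPP) on climate drivers,
# then forecast with projected climate values. All numbers here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-in historical record: annual precipitation (mm) and temperature (deg C).
precip = rng.uniform(1200, 1800, size=17)      # 17 years, 2000-2016
temp = rng.uniform(26, 29, size=17)
gpp = 0.002 * precip - 0.05 * temp + rng.normal(0, 0.05, 17)  # synthetic GPP

X = np.column_stack([precip, temp])
model = LinearRegression().fit(X, gpp)

# Forecast with hypothetical IPCC-style projections for 2050.
projected = np.array([[1350.0, 30.5]])
print(f"Projected GPP index for 2050: {model.predict(projected)[0]:.3f}")
```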

Measuring Urban Surface Reflectivity and Heat Mitigation Potential at High-Resolution with Remote Sensing and Machine Learning

Published in AGU Fall Meeting, 2019

Urban spaces have expanded significantly in the past few decades, and this trend is expected to continue. The rapid growth of modern cities reduces greenspace and increases the amount of heat-absorbent surface, which alters the local climate by trapping more heat from solar radiation and in turn raising the temperature of urban areas, a phenomenon known as the urban heat island effect. The effects are most prominent in the central parts of cities and can pose severe risks to human health. The heat island effect can be reduced by expanding urban forestry and installing cool roofs and pavements with high solar reflectance, but cities lack, and are seeking, ways to target and meaningfully measure progress on heat mitigation. There is currently no cost-effective, easily repeatable, and scalable way to measure urban surface changes, and this lack of concrete measurability slows the adoption of urban heat mitigation policies. Cities are also seeking a scientifically sound way to select interventions and spatially target heat policy and projects to maximize the effectiveness of limited budgets. Producing open-source methods to generate a time series of high-resolution maps of urban roof and pavement albedos will help fill this need for large geographies at low cost.
We present an automated workflow to monitor the surface reflectivity of roofs and pavements in urban areas. We build on the methods developed in Ban-Weiss et al. 2015a & 2015b and scale them through cloud computing and machine learning. We use Microsoft building footprints and the OpenStreetMap/SharedStreets API to get geometries of roofs and streets. Using open-source satellite imagery from the National Agriculture Imagery Program (NAIP), ground truth measurements collected through project partners, and regression machine learning, we create a high-resolution map of surface reflectivity for multiple urban areas in the United States across multiple time periods. The resulting data and maps provide an estimate of existing surface reflectivity at the building and street-segment scale, which can be superimposed with current heat vulnerability, green infrastructure, urban morphology, and urban heat data. This tool serves cities in developing and evaluating urban heat island reduction strategies and promoting wider adoption of urban heat mitigation programs.

Recommended citation: Rashid, T., Mackres, E., Guzder-Williams, B.P., Kerins, P., Pietraszkiewicz, E. (2019), Measuring Urban Surface Reflectivity and Heat Mitigation Potential at High-Resolution with Remote Sensing and Machine Learning, Abstract [GC21I-1363] presented at 2019 Fall Meeting, AGU, San Francisco, CA, 9-13 Dec.

Mapping Urban India: Comprehensive Land Use/Land Cover Classification of Urban Areas Using Public Imagery and Machine Learning

Published in AGU Fall Meeting, 2019

An ever-larger share of humanity lives in urban areas, a trend expected to continue in the coming decades. Whether considering the economic, demographic, environmental, or societal dimensions of human activity and impact, the ways in which cities change in order to accommodate swelling urban populations—or fail to do so—will have outsize significance to human well-being as well as local and global environmental outcomes. Credible land use / land cover (LULC) maps are a vital tool for monitoring and measuring these changes, but the capacity to produce this information is often limited, especially in some of the urban areas expected to undergo the most disruption and growth. Automated mapping, driven by satellite imagery and machine learning, may offer a solution.
We present a map classifying LULC across all of urban India at 5-meter resolution. Using only publicly available inputs—satellite imagery from the Sentinel-2 constellation and manually coded ground-truth data from the Atlas of Urban Expansion—we constructed training samples representing fourteen Indian cities for supervised machine learning. By feeding these samples into a convolutional neural network, we trained a single model capable of classifying LULC across a range of environments and urban morphologies. The trained model was then applied to satellite imagery to make comprehensive predictions for all urban areas in India, as identified by the Global Human Settlement Layer. We quantified model performance by comparing predictions to reserved validation ground truth from the Atlas of Urban Expansion, where available. All data processing and storage, as well as model creation, training, and application, were executed in the commercial cloud within a highly scalable architecture. This permits continual map generation as new imagery is collected, allowing for low-cost monitoring of urban change, in near real-time and across entire countries. The methods, architecture, and training data can be quickly and straightforwardly transferred to alternative geographies and imagery sources, and can be applied to different time periods, allowing historical change detection.

Recommended citation: Kerins, P., Guzder-Williams, B.P., Rashid, T., Mackres, E., Pietraszkiewicz, E. (2019), Mapping Urban India: Comprehensive Land Use/Land Cover Classification of Urban Areas Using Public Imagery and Machine Learning, Abstract [IN42A-07] presented at 2019 Fall Meeting, AGU, San Francisco, CA, 9-13 Dec.
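The core model behind this abstract is a patch-based CNN classifier over multispectral imagery. Here is a minimal sketch of that pattern in Keras, assuming 32x32 patches with four Sentinel-2 bands and a handful of LULC classes; all sizes and layer choices are illustrative, not the published architecture.

```python
# Hedged sketch: a small patch-based CNN for LULC classification.
# Patch size, band count, class count, and layers are illustrative assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 6            # e.g., roads, buildings, open space, water, ...
PATCH = (32, 32, 4)        # 32x32-pixel patches with 4 Sentinel-2 bands

model = models.Sequential([
    layers.Input(shape=PATCH),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='softmax'),  # one LULC class per patch
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```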

Spatial Characterization of Urban Land Use through Machine Learning

Published in World Resources Institute, 2020

This technical note describes the data sources and methodology underpinning a computer system for the automated generation of land use/land cover (LULC) maps of urban areas. Deploying a rich taxonomy to distinguish between different types of LULC within a built-up area, rather than merely distinguishing between artificial and natural land cover, enables a huge variety of potential applications for policy, planning, and research. Applying supervised machine learning techniques to satellite imagery yielded trained algorithms that can characterize LULC over a large spatial and temporal range, while avoiding many of the onerous constraints and expenses of the historical LULC mapping process: manual identification and classification of features. This note presents the construction and results of one such set of algorithms—city-specific convolutional neural networks—used to establish the technical viability of such an approach.
Full report here.

Recommended citation: Kerins, P., E. Nilson, E. Mackres, T. Rashid, B. Guzder-Williams, and S. Brumby. 2020. “Spatial Characterization of Urban Land Use through Machine Learning.” Technical Note. Washington, DC: World Resources Institute.

talks

teaching
