
Welcome to the Kellylab blog

geospatial matters


Entries in lidar (35)

Wednesday
Jun 13, 2012

SNAMP lidar team featured in ANR's Green Blog

The SNAMP spatial team and the cool lidar work we are doing were recently featured in ANR's Green Blog. The article highlights the work of UC Merced in forest visualization. Currently, most visualization software packages focus on one forest stand at a time (hundreds of acres), but now we can visualize an entire forest, from ridge top to ridge top. Sierra Nevada Adaptive Management Project (SNAMP) Spatial Team principal investigators Qinghua Guo and Maggi Kelly, graduate student Jacob Flanagan, and undergraduate research assistant Lawrence Lam have created cutting-edge software that allows us to visualize the entire firescape (thousands of acres).

Wednesday
Apr 18, 2012

New high resolution coastal elevation data for California

The California Ocean Protection Council has released state-wide high resolution elevation data for coastal California and much of San Francisco Bay. LiDAR data were collected between 2009 and 2011 and cover nearly 3,800 square miles. Data can be downloaded from the NOAA Coastal Services Center's Digital Coast website.

Tuesday
Apr 3, 2012

Lidar + OPALs geolunch and workshop next week!

Our colleague Bernhard Hofle from the University of Heidelberg will be here next week as part of an international exchange project: Airborne Laser Scanning for 3D Vegetation Characterization: Set-up of an International Signature Database. Bernhard is interested in open source GIS and spatial database management systems, object-based image and point cloud analysis, radiometric calibration of full-waveform airborne LiDAR data, and other topics.

Bernhard is part of a group that now has one of the first Terrestrial Laser Scanning (TLS) systems worldwide with full-waveform recording capability (an upgraded Riegl VZ-400). Direct access to full-waveform signatures and physical observables is expected to bring deeper understanding and substantially improved analysis of the laser backscatter of natural objects. The unique system will be applied in new research projects dealing with the extraction of 3D geoinformation in, for example, precision farming, geoarchaeology, geomorphology, and forestry. Furthermore, an extensive web-based database of reference signatures for known objects will be developed, based on calibrated waveform features derived by TLS.

He is a leader in analysis of discrete and waveform lidar data in urban and forest applications, and one of the developers of the cool OPALS lidar software. He'll be giving a geolunch talk and a workshop afterwards on the software: the geolunch runs from 1 to 2, then we will stick around and learn about OPALS.

Saturday
Mar 24, 2012

ASPRS 2012 Wrap-up

ASPRS 2012, held in Sacramento, California, had about 1,100 participants. I am back to being bullish about our organization, as I now recognize that ASPRS is the only place in the geospatial sciences where members of government, industry, and academia can meet, discuss, and network in a meaningful way. I saw a number of great talks, met with some energetic and informative industry reps, and got to catch up with old friends. Some highlights: Wednesday's keynote speaker was David Thau from Google Earth Engine, whose talk "Terapixels for Everyone" was designed to showcase the ways in which the public's awareness of imagery, and their ability to interact with geospatial data, are increasing. He calls this phenomenon (and GEE plays a big role here) "geo-literacy for all", and discussed new technologies for data/imagery acquisition, processing, and dissemination to a broad public that can include policy makers, land managers, and scientists. USGS's Ken Hudnut was Thursday's keynote, and he had a sobering message about California earthquakes and the need for (and use of) geospatial intelligence in disaster preparedness.

Berkeley was well represented: Kevin and Brian from the GIF gave a great workshop on open source web tools, Kevin presented new developments in cal-adapt, Lisa and Iryna presented chapters from their respective dissertations, both relating to wetlands, and our SNAMP lidar session with Sam, Marek, and Feng (with Wenkai and Jacob from UC Merced) was just great!

So, what is in the future for remote sensing/geospatial analysis as told at ASPRS 2012? Here are some highlights:

  • Cloud computing, massive datasets, and data/imagery fusion are everywhere, but the principles of basic photogrammetry should still come into play;
  • We saw neat examples of scientific visualization, including smooth rendering across scales, fast transformations, and immersive web;
  • Evolving, scalable algorithms for regional or global classification and/or change detection; for real-time results rendering with interactive (on-the-fly) algorithm parameter adjustment; often involving open source tools and machine learning;
  • Geospatial data and analysis are heavily, but inconsistently, deployed throughout the US for disaster response;
  • Landsat 8 goes up in January (party anyone?) and USGS/NASA are looking for other novel partnerships to extend the Landsat lifespan beyond that;
  • Lidar is still big: with new deployable and cheaper sensors like FLASH lidar on the one hand, and increasing point density on the other;
  • OBIA, OBIA, OBIA! We organized a nice series of OBIA talks, and saw some great presentations on accuracy, lidar+optical fusion, and object movements; but thorny issues about segmentation accuracy and object ontology remain;
  • Public interaction with imagery and data is critical. The public can be a broader scientific community, or an informed and engaged community who can presumably use these types of data to support public policy engagement, disaster preparedness, and response.

Friday
Dec 16, 2011

Cool lidar video from SNAMP

Thanks Marek! Flying into our northern SNAMP field site, via Landsat to lidar.

Happy New Year everyone.

Wednesday
Jun 22, 2011

New York City Solar Map Released

An interactive web-based map called The New York City Solar Map was recently released by the New York City Solar America City Partnership, led by Sustainable CUNY. The map allows users to search by neighborhood or address, or to explore the map interactively, zooming in and clicking on a building or drawing a polygon to calculate a number of metrics related to building rooftops and potential solar power capacity, including: potential energy savings, kilowatt output (as a time series), carbon emission reductions, payback period, and a calculator for examining different solar installation options and savings with your utility provider. The map is intended to encourage solar panel installations and make information regarding solar panel capacity easier to access. Lidar data covering the entire city were collected last year and used to compute the metrics that determine solar panel capacity.

The data reveal that New York City has the potential to generate up to 5,847 megawatts of solar power; the installed solar capacity in the entire US today is only 2,300 megawatts. 66.4 percent of the city's buildings have roof space suitable for solar panels. If panels were installed on those rooftops, 49.7 percent of the current estimated daytime peak demand, and about 14 percent of the city's total annual electricity use, could be met.
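
As a rough illustration of how lidar-derived roof area turns into capacity numbers like these, here is a back-of-envelope sketch in Python; the panel output and usable-roof fraction are assumed values for illustration, not the parameters of the NYC Solar Map's actual model.

```python
# Back-of-envelope rooftop solar capacity from lidar-derived roof area.
# PANEL_WATTS_PER_M2 and USABLE_FRACTION are illustrative assumptions,
# not the values used in the NYC Solar Map model.

PANEL_WATTS_PER_M2 = 150   # assumed panel output under full sun (W per m^2)
USABLE_FRACTION = 0.6      # assumed share of roof free of shade/obstructions

def roof_capacity_kw(roof_area_m2: float) -> float:
    """Rough peak capacity (kW) for one lidar-delineated rooftop."""
    return roof_area_m2 * USABLE_FRACTION * PANEL_WATTS_PER_M2 / 1000.0

print(f"{roof_capacity_kw(500):.0f} kW")  # a 500 m^2 roof -> 45 kW
```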

This map showcases the utility and power of webGIS and how it can be used to disseminate complex geographic information to anyone with a browser, putting the information needed to jump start solar panel installation in the hands of the city’s residents. The map was created by the Center for Advanced Research of Spatial Information (CARSI) at CUNY’s Hunter College and funded primarily by a United States Department of Energy grant.

Source: Click here for a NYTimes article on the project.

Click here to view the New York City Solar Map.


Wednesday
Apr 20, 2011

New BAAMA Journal Published

Volume 5, Issue 1 - Spring 2011

BAAMA is pleased to announce that The BAAMA Journal has been published in conjunction with Earth Day. Special thanks to all our contributing authors and editors. The BAAMA Journal is a publication that highlights Bay Area people and projects that use geospatial technologies.

IN THIS ISSUE:

  • Building Virtual San Francisco: Growing Up With GIS
  • DPW Uses LiDAR and a Custom Algorithm for Delineating Drainage Catchments and Hydrologic Modeling
  • Preparing Historical Aerial Imagery of Southern California Deserts for use in LADWP's GIS
  • Where in the Bay Area


Monday
Apr 11, 2011

New SNAMP spatial newsletter on lidar posted

This is an excerpt from our recent SNAMP newsletter on our lidar work, written by me, Sam, and Qinghua.

We are using Lidar data to map forests before and after vegetation treatments and to measure forest habitat characteristics across our treatment and control sites. These data will give us detailed information about how forest habitat was affected by fuel management treatments.

Visualizing the forest
The image at left is not a photograph: it is a computer-generated image of our SNAMP study area, using only Lidar data. These kinds of visualizations are commonly used in the forestry field for stand and landscape management, and to project forest environments into the future. But visualization software packages usually focus on only one stand at a time. Our method allows us to visualize the whole firescape. This is useful for understanding the complexity in forest structure across the landscape, how the forest recovers from treatments, and how animals with large home ranges might use the forest. The UC Merced team created this cutting-edge product.

Finding the trees in the forest
In order to see the trees in the forest, the UC Merced spatial team researchers developed a method to segment individual trees from the Lidar point cloud. The method identifies and classifies trees individually and sequentially from the tallest tree to the shortest tree. We tested this method on our SNAMP Lidar data. These forests are complex mixed coniferous forests on rugged terrain, and yet our method is very accurate at defining individual tree shapes. We are applying the method in both of the SNAMP study areas.
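
For readers curious what "tallest to shortest" looks like in practice, here is a toy sketch in Python of the general idea (not the actual UC Merced algorithm): find the highest remaining point, claim the nearby points as that tree's crown, remove them, and repeat. The fixed crown radius and height cutoff below are illustrative assumptions.

```python
import numpy as np

def segment_trees(points, crown_radius=3.0, min_height=2.0):
    """Toy tallest-to-shortest tree segmentation of a lidar point cloud.

    points: (N, 3) array of x, y, z coordinates. crown_radius and
    min_height are fixed illustrative parameters; a real method would
    adapt the search radius to tree size and local point spacing.
    """
    remaining = points.copy()
    trees = []
    while len(remaining) and remaining[:, 2].max() >= min_height:
        top = remaining[np.argmax(remaining[:, 2])]            # tallest point left
        dist = np.hypot(remaining[:, 0] - top[0], remaining[:, 1] - top[1])
        in_crown = dist <= crown_radius                        # points near that treetop
        trees.append(remaining[in_crown])                      # one segmented tree
        remaining = remaining[~in_crown]                       # remove it and repeat
    return trees
```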

Mapping downed logs with lidar data
The UC Berkeley spatial team researchers used some new techniques that help distinguish individual features, and mapped the logs, as well as some of the trees in this stand. In the figure at left: red colors are logs, green colors are trees.

More information on these and other projects can be found on the SNAMP website.

Wednesday
Mar 30, 2011

SNAMP spatial newsletter on lidar

This is an excerpt from an older SNAMP newsletter Marek and I wrote describing the use of lidar in our Sierra Nevada Adaptive Management Project. Originally published November 2008.

Environmental sciences are inherently spatial, and geospatial tools such as Geographical Information Systems (GIS), Global Positioning Systems (GPS), and remote sensing are fundamental to these research enterprises. Remote sensing has been used for forest and habitat mapping for a long time, and new technological developments such as LIDAR (light detection and ranging) are making this field even more exciting. Here we briefly describe LIDAR's basic principles and show some preliminary analyses completed for the SNAMP Project. We are using these data to model detailed topography to help the water team understand runoff in the SNAMP watersheds, to map forest canopy cover and vegetation height as inputs to the fire and forest health team's detailed fire models, and to derive important forest habitat characteristics for the spotted owl and fisher teams.

We contracted with the National Center for Airborne LIDAR Mapping (NCALM) for our data. They flew the GEMINI instrument at approximately 600 m above ground level, with 67% swath overlap. The instrument collected 4 discrete returns per pulse at 125 kHz, and the data have a final density of 9 points per m².
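
As a back-of-envelope check on those numbers, the pulse rate alone roughly accounts for the delivered density; the aircraft speed and swath width below are hypothetical values chosen only to illustrate the arithmetic, not NCALM's flight plan.

```python
# Rough check of nominal point density from the acquisition parameters.
# speed_m_s and swath_m are hypothetical values for illustration only.

pulse_rate_hz = 125_000    # from the post
speed_m_s = 60.0           # assumed aircraft ground speed
swath_m = 230.0            # assumed swath width at ~600 m AGL

pulses_per_m2 = pulse_rate_hz / (speed_m_s * swath_m)  # single flight line
print(f"{pulses_per_m2:.1f} pulses per m^2")           # ~9.1

# With 67% swath overlap and up to 4 returns per pulse, a *final*
# density of 9 points per m^2 is consistent with these numbers.
```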

Raw Data: LIDAR data are typically delivered as a "point cloud": a collection of elevations and their intensities that can be projected in three-dimensional space. In Figure 2 (right) we show this "point cloud" concept. There are thousands of individual points in the image, each colored according to its height (magenta and red are high, orange and yellow are low).
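
For anyone who wants to reproduce this kind of height-colored rendering, a minimal matplotlib sketch (with synthetic random points standing in for the real SNAMP cloud) looks like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Height-colored 3D scatter in the style of the point cloud figure;
# synthetic random points stand in for the real lidar data here.
rng = np.random.default_rng(0)
xyz = rng.uniform([0, 0, 0], [100, 100, 40], size=(5000, 3))  # x, y, z in meters

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=xyz[:, 2], cmap="plasma", s=1)
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_zlabel("elevation (m)")
plt.show()
```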

Bare Earth: Once the data are collected, the first step is to transform them into a "bare earth" model, which is an approximation of the ground with all objects above the surface removed. We use the "Last Return" data (see Figure 1 above) to generate this model of the bare earth. These are typically very detailed products (with a small footprint on the ground) and provide much more topographic information than Digital Elevation Models (DEMs) derived from topographic maps. Our DEM has a ground resolution of under 1 m.
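
To make the bare-earth idea concrete, here is a minimal sketch that grids the lowest last-return elevation into DEM cells; this shows only the core idea, and a production workflow like NCALM's adds outlier filtering and proper surface interpolation on top of it.

```python
import numpy as np

def bare_earth_grid(last_returns, cell=1.0):
    """Crude bare-earth DEM: lowest last-return elevation per grid cell.

    last_returns: (N, 3) array of x, y, z for last-return points only.
    Real workflows add outlier filtering and surface interpolation;
    this only shows the grid-minimum idea behind a ~1 m DEM.
    """
    x, y, z = last_returns.T
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, elev in zip(rows, cols, z):
        if np.isnan(dem[r, c]) or elev < dem[r, c]:
            dem[r, c] = elev          # keep the lowest return in each cell
    return dem
```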

Forest Structure: Another typical step in processing LIDAR data is to examine individual trees and forest structure. An example of a forest stand is shown in Figure 4. These and other products help us understand how the forest influences surface hydrology, how a patch of forest might provide habitat for a fisher, and how a forest might burn given certain weather and wind patterns.

Future Analyses: We are in the process of linking the forest parameters gathered by the Fire & Forest Ecosystem Health Team in summer 2008 with the LIDAR-derived data to help scale up forest variables to the fireshed scale. For example, tree height, tree DBH (diameter-at-breast-height), and canopy cover have been successfully modeled using LIDAR data in other studies, and there is active research linking field-based and LIDAR-based fire-related measures such as canopy base height and ladder fuels, and wildlife-related measures such as vertical structure.
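
As a concrete illustration of that scaling-up step, the sketch below fits a simple regression of a field-measured variable on plot-level lidar metrics and then predicts across unsampled cells; the variables and data are synthetic stand-ins, not our SNAMP measurements or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit a field-measured variable (here a fake plot-level mean DBH) against
# plot-level lidar height metrics, then predict across unsampled cells.
# All data below are synthetic stand-ins, not SNAMP measurements.
rng = np.random.default_rng(1)
lidar_metrics = rng.uniform(5, 40, size=(50, 2))     # e.g. mean and 95th-pct height
field_dbh = 2.0 * lidar_metrics[:, 0] + 5 + rng.normal(0, 3, 50)

model = LinearRegression().fit(lidar_metrics, field_dbh)
print("R^2 on training plots:", round(model.score(lidar_metrics, field_dbh), 2))

fireshed_cells = rng.uniform(5, 40, size=(1000, 2))  # lidar metrics per grid cell
predicted_dbh = model.predict(fireshed_cells)        # wall-to-wall prediction
```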

Friday
Feb 18, 2011

Very high res urban mapping: research reported at the Berkeley BEARS 2011 EECS Annual Research Symposium

At the February 17th Berkeley Electrical Engineering and Computer Sciences (EECS) Annual Research Symposium, some interesting new developments and research in the world of information technology were showcased. The first section of the symposium hosted four talks about current and future IT research at UC Berkeley, focusing on large scale data mining, aggregation, and analysis; artificial intelligence and language processing; augmenting reality with virtual and mobile systems of information display and collection; and sensor/communication nanotechnology.

Most notable in application to GIS, although far off, was the mention that work is underway to miniaturize the lasers in LiDAR sensors, both for insertion into mobile phones to enable collective 3D ground mapping of urban areas by mobile users, and for placement in building materials to monitor building occupants and conditions. More current was the talk from Avideh Zakhor about work in the VIP lab on combining data from mobile ground-based sensors, similar to those used to create Google Street View, with aerial photos to create 3D urban models at varying resolutions (Read more) (Video). Also in development is applying the same technology used to create Google Street View of exterior streets to the interiors of buildings. This enables the creation of 3D interior building models and photorealistic walk-through environments of interior spaces, which may have many implications in emergency preparedness/management, design, and marketing (Read more) (Example image below).

Source: Image from Avideh Zakhor homepage: http://www-video.eecs.berkeley.edu/~avz/

Check out the BEARS 2011 website here for more information; video recordings of the talks will be posted soon.

For more information on the individual presenters: Ion Stoica, Dan Klein, Avideh Zakhor, Kristofer Pister, and Jan Rabaey.