Post-Event Report: Building Beta Data Management Protocols for Soil Carbon GHG Quantification
https://www.farmfoundation.org/2023/11/28/building-beta-data-management-protocols-for-soil-carbon-ghg-quantification/
November 28, 2023

To assist USDA in initial designs for their “Greenhouse Gas (GHG) Quantification Program,” Purdue Open Ag Technology and Systems Center (OATS), Semios, The Mixing Bowl, and Farm Foundation hosted the Building Beta Data Management Protocols for Soil Carbon GHG Quantification virtual “event storming” which took place on October 24, 2023.

Modeling and measuring soil carbon content is a soup of complexity.  Even deciding what to measure and how to measure it is difficult enough. Add to that the sometimes difficult to quantify variations in sampling protocols, labs, model needs, the meaning of terms, and the humans involved. And add to that the social and coordination complexities of differing goals across necessary stakeholders such as landowners, government, and industry, interoperability across many organizations for both lab results and records of farming practices, and the sheer spatial scale of estimating soil content across the entire country.

If that wasn’t enough, the models of the relationship between farming practices and changes in soil carbon content are themselves under active research and development which will likely lead to innovations that may constantly move the goalposts for data collection. Much like actual soup, this “complexity soup” must be eaten one spoonful at a time, and more spoons working together will get to the bottom of the bowl faster.  Success will require both generalized coordination and cooperation that builds a solid community of contributors as well as specialized early pipelines that get the first data flowing quickly as a basis to start iterating improvements.

What follows in this document is a summary of the event, the discussions in each of its four breakout sessions, and the proposed next steps.

On Tuesday, October 24, 2023, a diverse group of government, industry, and academic pioneers gathered for an “event storming” session to build a shared understanding of the issues and solutions surrounding “beta data management” systems for measuring soil carbon content and modeling the relationship between farming practices and changes in soil carbon content. The session placed special focus on the United States Department of Agriculture (USDA) initiative to perform country-wide soil sampling and data collection, which collects existing data, creates new data, and connects data with carbon models and researchers. Over the course of just a few short hours, the group collectively produced an online whiteboard with over 2,000 pieces of information toward this goal.

The goal of this event was to form a community-led understanding of data management surrounding the measurement of soil carbon content and to identify common pain points to be addressed in follow-up events and initiatives. Participants jointly identified actions, actors, and data exchanges that occur, or need to occur, through a method known as “event storming.” The resulting Miro board with the virtual sticky-note-based results can be found here.

Beta Data Management Event Storming Miro Board

Traditional “event storming” tries to understand the big picture of a process by focusing on the “events” that take place and their approximate order in time. In a pre-pandemic world, this meant writing past-tense verbs (“events”) on orange sticky notes and sticking them on a wall as a big, in-person group of domain experts. In an attempt to approximate the rich conversational environment of in-person sessions, the supply chain was segmented into four separate but overlapping “sector timelines”: groups of sticky notes designed around optimizing a particular group of activities or goals. The larger group of event stormers was broken into four groups, each rotating through those segments while placing stickies and sharing their experiences and insights with the group. The four sector timelines were: Field Data Collection, Soil Lab, Feeding the Models, and Metadata Management.

In addition to events (represented with orange stickies), purple stickies represent “hot spots”: disagreements, ambiguities, etc. White stickies indicate “terms” that arise which comprise part of the domain language.

Finally, once the basic event flow has been established, blue stickies are added to represent decisions, questions, or “triggers” that are relevant to the people involved with the events in the timeline. The people themselves are shown on yellow stickies, and any data or information needed to make those decisions, answer those questions, or pull those triggers was added on green sticky notes.

The resulting “big picture” timelines give insight into what happens, who is making decisions, what questions need to be answered, and what data is needed at which points in time.

The virtual small groups succeeded in triggering some robust conversations, and the group as a whole produced rich event maps. We had a large and diverse group of participants, including farmers, agricultural service providers, ag tech companies, USDA personnel, academics, and researchers. The broad expertise and differing windows through which participants viewed various parts of the problem resulted in a lot of knowledge sharing, and great progress was made toward the goal of developing a shared understanding of the concepts and issues facing beta data management. Of particular note was the unexpected poetic realization that rutabagas, despite encompassing a negligible amount of land use, lend a particularly lyrical quality to “beta rutabaga data.”

Field Data Collection

Moderator: Prof. Ankita Raturi, Agricultural Informatics Lab & Open Ag Tech and Systems Center at Purdue University

When discussing field data collection in the context of soils in agricultural landscapes, the first image that comes to mind is a clod of rich, dark, fresh earth. We think of a laptop computer perched precariously on the edge of a truck, a trail of texts and emails, and the eventual chaos of wrangling a suite of datasheets from clipboards and electronic documents to start the journey. However, the lifecycle of a soil sample begins months, sometimes years, before it is collected, with discussions among farmers and other land stewards, researchers and technical assistance providers, extension agents and crop advisors as they establish a shared project goal and subsequently coordinate an intricate sequence of field data collection events. This breakout session mapped three event clusters where these soil data stakeholders collaborate to determine: (1) how a soil field data collection plan is developed, (2) how sampling logistics are coordinated and executed, and (3) how the soils data are verified, validated, and ultimately prepared for downstream analysis and use.

Some land stewards sample soils on their fields for their own understanding, using ad-hoc or on-demand data collection methods that likely have simpler logistics as they work independently to quickly ascertain some basic conditions of their field: pH, soil moisture, or otherwise, to inform an in-situ or in-season decision. However, land stewards increasingly participate in larger projects, where soils data are collected across landscapes as part of a goal-driven sampling project: to quantify soil carbon and GHG, to understand the movement of nutrients, and so on. In this scenario, the field data collection logistics are more complex. Individual land stewards must collect data using near-identical protocols, measure the same soil attributes, and even use the same labs for testing (as discussed in a separate session during this event). This level of coordination and collaboration inherently demands a more formalized methodology that is typically created through a rigorous planning process among soil data stakeholders.

In this session, participants converged on mapping this latter form of goal-driven and coordinated soil data collection, describing a semi-formal process where groups of data collectors are oriented toward common scientific questions. Individual data collectors, such as the land stewards themselves, will often have complementary site-specific goals related to soil health and land management. In this sense, the purpose of soil data collection is both to enhance scientific understanding of soil carbon and GHGs and to improve management practices to support sustainability outcomes.

Once a group of stakeholders is identified (e.g., land stewards, technical assistance providers, government regulators, scientists), a project manager will meet with the group to establish specific, scientifically feasible, and practical project goals. Participants noted the importance of thinking of the protocol as an artifact with many parts: sampling methods, stratification strategies, data structures, and vocabularies. Each component is informed by, of importance to, has impact on, and is subject to constraints from different stakeholders, which means there is a need to ensure clear communication, transparency, clarity, and consensus among the project stakeholders.

Next, the project stakeholders need to determine who they need to consult to inform the protocol, as well as identify which data are needed to develop it. In a listing exercise, participants named a non-trivial initial set of data they think are necessary for protocol design: field boundaries and field-level data; tract, PLU, ecological site ID, and other land identifiers; soil conditions including composition, texture, and suborder; cropping history including crop rotations and planting and harvest dates; management practices including input application and tillage history (with amount, date, and type for each); existing and planned conservation practices; land use, vegetation, natural resources, drainage, hydrological groups, and other environmental conditions; and probably more.
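
To make the shape of this background data concrete, here is a minimal sketch of how such a record might be structured. Every attribute name below is hypothetical, chosen to mirror the categories participants listed rather than any existing standard.

```python
from dataclasses import dataclass

@dataclass
class FieldBackground:
    """Hypothetical background record assembled before protocol design.

    Attribute names are illustrative only; they mirror the categories
    participants listed, not any existing data standard.
    """
    field_boundary_wkt: str       # field boundary geometry (WKT)
    land_ids: dict                # tract, PLU, ecological site ID, etc.
    soil: dict                    # composition, texture, suborder
    cropping_history: list        # rotations, planting/harvest dates
    management: list              # inputs and tillage (amount, date, type)
    conservation_practices: list  # existing and planned
    environment: dict             # land use, drainage, hydrologic group

example = FieldBackground(
    field_boundary_wkt="POLYGON((-86.91 40.42, ...))",  # placeholder geometry
    land_ids={"tract": "T-1042", "plu": "PLU-7", "eco_site": "R111XY012IN"},
    soil={"texture": "silt loam", "suborder": "Udalfs"},
    cropping_history=[{"year": 2022, "crop": "corn"}],
    management=[{"date": "2022-04-15", "practice": "tillage", "type": "no-till"}],
    conservation_practices=["cover crop (planned)"],
    environment={"drainage": "somewhat poorly drained", "hydro_group": "C"},
)
```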

Assembling this baseline data creates a slew of complications. First, participants noted a desire to identify what constitutes a “minimal” dataset and to come to a consensus across different soil data collection efforts on the necessity and utility of different data types. Digital field-level data with semantic integrity is also hard to obtain. Each of the different types of data is collected for multiple, sometimes conflicting purposes: for crop management, for regulatory reporting, for ad-hoc understanding; and sometimes, the data simply does not exist. Data quality, format, structure, and reusability are also subject to the norms and limitations imposed by the data collection tools used. This varying level of data availability, granularity, and standardization means that the seemingly innocuous task of collecting background data to inform a soil data collection plan can devolve into a behemoth data science project in itself.

Once the required background data are collected and prepared, they are used by project stakeholders to create an SOP or protocol. Three questions drive this event. What data needs to be collected and through which sampling methods? What is the stratification goal and strategy? Who will collect the data? Participants noted the need to strive for simplicity in the sampling and data collection process to improve data quality, consistency, and practicality.

Participants dug into the process for identifying a stratification methodology, as it is a critical component of protocol design. At a minimum, a set of field boundaries must be shared for stratification. These boundaries, in conjunction with any other field-level data available, are used to create a soil sampling plan. The field maps may be compared with the background data to establish areas of variability and priority. The stakeholders must make many highly granular decisions, for instance: which soil characteristics are of key interest to measure; sampling hardware and how the act of sampling needs to occur (e.g., sampling depth, density, number); which labs to use for subsequent soil testing; and how to accommodate site-specific constraints (e.g., physiographic limitations). Once these decisions are agreed upon, stakeholders prepare appropriate data layers, sometimes conducting a mini quality-assurance process to determine data suitability and validity. The choice of stratification algorithm then depends on further decision points, such as composite vs. single-core sampling requirements, project-level vs. field-level stratification, and the feasibility of the proposed sampling effort.
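
As one illustration of what a stratification step can look like in practice, the sketch below clusters a grid of field covariates into sampling strata and allocates a sampling budget proportionally. The event did not prescribe any particular algorithm; k-means and the synthetic covariates here are assumptions for demonstration only.

```python
# Minimal sketch: stratify a field into sampling zones by clustering grid-cell
# covariates. K-means and the synthetic data are illustrative assumptions,
# not a protocol endorsed at the event.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells = 400  # grid cells covering the field
covariates = np.column_stack([
    rng.normal(250, 5, n_cells),   # elevation (m), synthetic
    rng.uniform(0, 1, n_cells),    # normalized soil index, synthetic
])

n_strata = 4
strata = KMeans(n_clusters=n_strata, n_init=10, random_state=0).fit_predict(covariates)

# Naive proportional allocation of a fixed sampling budget by stratum area.
total_samples = 20
for s in range(n_strata):
    share = (strata == s).mean()
    print(f"stratum {s}: ~{round(total_samples * share)} samples")
```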

Once a stratification and sample plan is created, several forms of stakeholder consultation must occur. Land stewards and field techs are consulted to understand site-specific constraints and, in an ideal situation, the SOP is customized for different sites through a common approach. The SOP documents are ideally shared with the scientific community for review if radically novel approaches are utilized that have yet to be scientifically validated, as is often the case given the emerging soil science and related research. Participants noted that project stakeholders would ideally also consult with registries, particularly if the land stewards involved are interested in carbon credit and other ecosystem service marketplace programs. Similarly, consultation with agencies like NRCS that are involved in conservation planning can ease the downstream burden of cost-share program or regulatory reporting. Participants noted that while we often treat the SOP as though it is set in stone, it is typically modified once the sampling venture commences and things change. As SOP review with different stakeholders comes to a close, the level of granularity in the SOP may be increased further to provide step-by-step guidance for the in-field staff and technicians who will actually conduct the soil sampling. The project stakeholders both create training materials and ideally conduct in-field training with soil samplers on how to execute the SOP.

Given the gargantuan effort involved in protocol design, many questions remain. Does planning always translate to action? What is the minimal effort needed to create a scientifically valid field data collection SOP to understand soil health, quantify soil carbon and GHGs, and ultimately support improved land management actions?

If a detailed, coherent and practical field data collection protocol is designed, then the actual in-season logistics of coordinating sampling dates with land stewards, and the actual act of sampling itself, are more straightforward. Sampling is scheduled with the land steward, or if their own team is responsible for the sampling, they coordinate amongst themselves. A trained field technician arrives at the site, plans their route among all the sample locations, and then simply samples according to plan at the predefined locations. Ideally, there is guidance in the SOP on how to adjust if there is something unexpected at the location itself (e.g., wet, too steep, there’s a tree!). The SOP should also contain guidance on how to select subsamples within the field.

The biggest pain point during the field data collection itself is ensuring that samplers collect the necessary metadata so that the samples are not just meaningless, uncontextualized lumps of soil. Field-level data is ideally collected first, including land management and current-season conditions (e.g., harvested/current/intended crop). Samples, including subsamples, are ideally located to some level of positional accuracy and include, for instance, an interpretable sample ID, GPS data, collection date, sampling depth, soil conditions, type of sample (e.g., bulk or carbon), and implement used (e.g., shovel or probe). The sample must also be bagged, tagged, and labeled according to the requirements of the lab where it will be sent. Participants noted the need for some level of quality control to be conducted on the sample data before leaving the field, including annotations of possible errors or changes.
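
A small sketch of what a per-sample record and pre-shipment check might look like follows; the required keys are hypothetical stand-ins for the metadata listed above, not a proposed schema.

```python
# Sketch: a per-sample metadata record plus a pre-shipment QC check, so a
# sample never leaves the field as an uncontextualized lump of soil.
# Keys are illustrative stand-ins, not a standard.
REQUIRED = ["sample_id", "lat", "lon", "collected_on",
            "depth_cm", "sample_type", "implement"]

def field_qc(record: dict) -> list:
    """Return a list of problems to fix before the sample ships."""
    problems = [k for k in REQUIRED if record.get(k) in (None, "")]
    if record.get("depth_cm") is not None and record["depth_cm"] <= 0:
        problems.append("depth_cm must be positive")
    return problems

sample = {
    "sample_id": "F12-S003",   # interpretable ID
    "lat": 40.4259, "lon": -86.9081,
    "collected_on": "2023-10-24",
    "depth_cm": 30,
    "sample_type": "bulk",     # e.g., bulk or carbon
    "implement": "probe",      # e.g., shovel or probe
    "notes": "edge of wet area; moved 5 m per SOP",
}
print(field_qc(sample) or "ready to bag and tag")
```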

Once the samples are collected, those that will be subject to lab testing require work orders to be verified and shipped with the samples to the lab. At this point, there is a suite of soil data events and challenges that occur on the lab-side, as discussed in the “Soil Lab” breakout group.

A common aphorism in statistics is that “all models are wrong, but some are useful.” Participants noted that, similarly, all data are wrong, and the challenge lies in determining how to quantify how “wrong” the data are. That is, when we collect any data, there is some degree of error or uncertainty. Participants noted the importance of verification of soil data samples: for instance, bulk densities are very error prone in practice depending on how the sample was collected. While not all uncertainty can be accounted for, it is important for documentation and metadata to reflect how and where these issues may crop up. Data are also subject to bias (e.g., in where they were sampled); thus, it is important to ensure that there are mechanisms to document assumptions and constraints in Quality Assurance and Quality Control (QA/QC) plans. There is also a need for methods to reduce errors as data and samples are passed from sampler to shipper to lab tests and back. Solutions may include simply tagging samples themselves with supplementary key identifiers in case of mixups. Sample documentation could also be linked to the lab results and site data.

Where data verification deals with accuracy and quantifying uncertainty, data validation involves ensuring that the data reflect the initial protocol adequately. A simple and common data validation example involves checking that the number of samples actually collected matches the prescribed number of samples, and if not, why. However, participants noted the need for validation of the process itself! This could be in the form of community review of processes or some form of external review from analogous data stakeholders.

Once data is verified and validated, the issue of how to communicate that a sample is “certified” arises. Particularly as the data assembly process begins and different types of data with varying levels of accuracy, uncertainty, and quality are interconnected, there is a need to ensure data traceability back to the point of origin. As field data collection wraps up, stakeholders may consolidate data into a master datasheet, upload it to a shared database, or simply file it away in a physical or digital folder. However, the reality of field data assembly also includes a mess of data spread across email inboxes, various record management systems, individual phones and computers, and many other formal and informal locations. Participants noted that many of the challenges in data assembly include the need for improved interoperability, whether through the adoption of compatible semantic standards or tools for data conversions. As discussions regarding data sharing begin among data stakeholders, issues around privacy, trust, governance, consent management, and concerns regarding FAIR data must be considered. The downstream complexities involved in using soils data and handling metadata were further discussed in the “Feeding the Models” and “Metadata Management” groups, respectively.

Soil Lab

Moderator: Aaron Ault, Open Ag Tech and Systems Center at Purdue University

A natural assumption in the soil lab segment was that it starts after the samples have been collected in the field.  However, the “soil lab chosen” event made it clear that several parts of the timeline happen prior to digging in the dirt.  The timeline sorted itself into roughly three categories: prior to the soil sample arriving at the lab, between arrival and testing, and reporting of results.

Some information from the “prior to lab” stage needs to “leapfrog” the lab: the lab doesn’t need it and doesn’t want the privacy implications of holding it, but the eventual models do. Examples include the sampling locations (ideally GPS), environmental conditions during sampling, sampling protocols (including how cores were taken or combined), farmer practices (cover cropping, etc.), and potentially farmer sentiments on the location via questionnaire. Clearly, samples need identifiers that pass to the lab and then through the lab to line up results with this leapfrog data later. This sort of data transfer generally happens already, either a priori in anticipation of a box of samples arriving at the lab or directly in the box itself. This has a serious design implication: if labs are later asked to report results directly to USDA, the lab may in fact need to collect and relay information that it would rather not handle. It also means that the creator of the initial ID for a sample should be well-defined in the overall process.
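
That design implication is essentially a join key: the sample ID must survive the round trip through the lab so that leapfrog metadata can be rejoined with results. A minimal sketch, with invented field names:

```python
# Sketch: "leapfrog" metadata held back from the lab is rejoined with lab
# results later via the shared sample ID. All names are invented for
# illustration.
leapfrog = {
    "F12-S003": {"gps": (40.4259, -86.9081), "cover_crop": "cereal rye"},
}
lab_results = [
    {"sample_id": "F12-S003", "total_carbon_pct": 1.8},
]

merged = [{**row, **leapfrog.get(row["sample_id"], {})} for row in lab_results]
print(merged[0])
```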

Several participants voiced concerns about the consistency of lab results in practice. Some of this may be due to inconsistencies in sampling methodology, including efforts prior to collection. Adoption of scientifically grounded standard operating procedures, such as those developed by the Soil Health Institute, is recommended to improve the consistency and reliability of collected data. Labs generally adhere to national Quality Assurance/Quality Control (QA/QC) protocols administered via several potential agencies. However, this sort of QA/QC happens prior to sampling and so is more generalized than the quality of results for a single box of samples. It therefore does not guarantee the absence of testing anomalies on any given day. To combat this, the preferred method voiced among participants was to include duplicate soil samples (i.e., split the same cores into multiple bags) in each set of samples. This has the disadvantage of added cost, but the advantage of minimal coordination: only the person sending the samples needs to participate in the protocol in order to know what level of trust the resulting data should have.
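
A sketch of how those blind duplicates could be checked is below; the 10% relative-difference threshold is an arbitrary placeholder, not a value discussed at the event.

```python
# Sketch: flag suspect lab batches by comparing blind duplicate pairs.
# The 10% relative-difference threshold is an arbitrary placeholder.
def relative_pct_diff(a: float, b: float) -> float:
    return abs(a - b) / ((a + b) / 2) * 100

duplicates = [  # (sample ID, result from bag A, result from bag B)
    ("F12-S003", 1.80, 1.84),
    ("F07-S011", 2.10, 1.55),
]
for sample_id, a, b in duplicates:
    rpd = relative_pct_diff(a, b)
    status = "OK" if rpd <= 10 else "FLAG: investigate this batch"
    print(f"{sample_id}: RPD={rpd:.1f}% -> {status}")
```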

Variations in lab logistics can cause some headaches for a national-scale program: some labs have specific Standard Operating Procedures for samples, such as the minimum amount of soil and type of bag used, which can pose practical barriers to nationwide consistent protocols. In addition, labs themselves may support different test assays with different methods. The MODUS standard hosted by AgGateway provides an excellent suite of test codes for clear, specific test assays, and this data should certainly travel along with the lab results themselves. This only identifies the test performed, however; an additional layer of requirements likely needs to specify which types of tests are allowed in order to participate. Clearly, a lab certification component will be necessary for a national-scale sampling program.

Once the box of samples arrives at the lab, the lab will generally scan QR codes on bags or otherwise record the identifiers for samples. Each sample’s requested assays can be included in the box or transmitted ahead of time and associated with the sample ID. It is unclear what level of data about samples in transit will be relevant, but some questions do exist, such as time spent in transit, temperature in transit, time in storage before testing at the lab, etc. 

The lab may homogenize a sample during testing, and the method used and resulting soil properties (texture, particle size, moisture content, etc.) may be important to record.

Some kinds of assays are more advanced and uncommon than others.  A baseline suite of test assays should be defined for generally available, inexpensive tests, and value-add suites of tests can also be performed in certain cases as warranted. The phrase “super sites” was used to describe soil sampling sites which may consistently order more advanced tests. From a modeler’s perspective, this is important because it means there will be a large dataset with simpler data, and a smaller dataset with more detailed and advanced data.  It is also possible that new methods for testing may become relevant throughout the program’s life cycle, so a framework for adapting to changing test schemes should be developed, ideally with an eye to understanding the comparability of old procedures to new ones in order to maximize the size of usable datasets.

In general, labs report results to the person who sent the sample (and paid for it). Reporting in the MODUS standard should be required for any national-scale program, as it will ensure interoperability among data and a clear means of identifying lab test procedures. Existing reporting tends to be missing critical data about the actual tests that were performed to get the results, and reports are generally haphazardly organized in non-standard CSVs or spreadsheets. Work is ongoing on an open source tool and code libraries (https://oats-center.github.io/modus) to make it easier to transition between spreadsheets and the MODUS standard. Should MODUS-based interoperability come to fruition in the soil health industry, it will finally enable the community to build tools and services that can deliver value to large swaths of stakeholders, including farmers, landowners, researchers, etc.
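
The sketch below shows the flavor of that transition: lifting rows from an ad-hoc lab spreadsheet export into records that carry an explicit test-method code alongside each value. The codes and field names here are invented placeholders, not the real MODUS vocabulary; the AgGateway standard and the OATS tooling define the actual codes and formats.

```python
# Sketch: lift rows from an ad-hoc lab CSV into records that carry an explicit
# test-method code with each value. "modus_code" values and field names are
# invented placeholders, NOT the real MODUS vocabulary.
import csv
import io

raw = io.StringIO("Sample,OM %,pH\nF12-S003,3.4,6.2\n")

# Hypothetical mapping from one lab's column headers to method codes.
COLUMN_TO_METHOD = {"OM %": "S-ORG-MATTER.placeholder", "pH": "S-PH.placeholder"}

records = []
for row in csv.DictReader(raw):
    for col, code in COLUMN_TO_METHOD.items():
        records.append({
            "sample_id": row["Sample"],
            "modus_code": code,   # which assay produced the value
            "value": float(row[col]),
        })
print(records)
```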

There is as yet an unclear path from lab to model researcher.  Some participants may be interested only in feeding their data into an existing model for a carbon certification, while others may be more interested in enabling research into the models themselves.  Modelers would prefer a reasonably centralized repository of hosted data with clear privacy and use restrictions.  This introduces a single point of failure for any participants involved in the sample collection side, and could be a very onerous privacy burden on the host of the data.  Centralization is also often inherently less secure since it allows malicious actors to focus on a single platform.  It may be that a hybrid approach leveraging interoperability among a network of data sites could provide a reasonable best of both worlds.

To the extent that the program wants to assemble data beyond prescriptive, paid-for direct sampling, some consideration should be given to how to encourage participation. The set of potential soil testing labs is a much smaller target than the full set of all landowners and farmers, and therefore may provide a better avenue to adoption. A lab could be set up to provide a copy of results directly into a data collection platform as an add-on feature for a sample, enabling labs to also make their customers aware of the program. However, this will still need to solve the problem of the “leapfrog” data, which the lab does not historically collect.

If the person who sent the sample to the lab intends for the sample to go directly into a model such as COMET, there may also be an opportunity to streamline that process as well.

Feeding the Models

Moderator: Rob Trice, founding partner of The Mixing Bowl & Better Food Ventures

The primary purpose of the USDA model is to improve our understanding of how effectively management practices sequester GHGs by providing estimates that are as accurate as possible. The two most important things about the model are that 1) it can ingest comprehensible information through data structures and ontologies it understands, and 2) it can be updated so that over time it becomes even more accurate by collecting and connecting with other, new data.

We know that, through research and data analysis, we will learn more about things like farm-level practice implementation. For example, what will the impact of management practices (like cover crop application) be on soil carbon sequestration? We also know that additional soil analysis data will be added to the model in the future.

We also know that USDA will start building its model based on COMET and DayCent. COMET is a greenhouse gas accounting tool that is used to estimate greenhouse gas emissions and carbon sequestration from agricultural production. DayCent is a biogeochemical process model that is used to simulate soil carbon and nitrogen dynamics, as well as greenhouse gas emissions. COMET uses DayCent as its underlying model to simulate entity-scale greenhouse gas emissions. This means that COMET relies on DayCent to provide estimates of greenhouse gas emissions from various agricultural activities, such as crop production, livestock production, and manure management. COMET also provides users with the ability to input their own farm-specific data, such as crop types, management practices, and soil conditions. This allows COMET to generate more accurate estimates of greenhouse gas emissions for individual farms.

The USDA model will leverage COMET and DayCent and will be able to ingest new data from the field, from labs, from scientific researchers, and from others’ models and databases.

To be effective, both the model and those contributing data to it need a common set of terms and data fields to feed the model. For instance, with cover crops, we need common terms to describe what was planted (legume or non-legume?), when, and where, as well as when the cover crop was terminated.
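
A controlled vocabulary for exactly this cover-crop example might look like the sketch below; the enum members and record fields are assumptions for illustration, not proposed model inputs.

```python
# Sketch: a controlled vocabulary for cover-crop events so every data
# collector reports the same terms. Members and fields are illustrative.
from dataclasses import dataclass
from enum import Enum

class CoverCropClass(Enum):
    LEGUME = "legume"
    NON_LEGUME = "non_legume"

@dataclass
class CoverCropEvent:
    field_id: str
    crop_class: CoverCropClass
    species: str
    planted_on: str      # ISO 8601 date
    terminated_on: str   # ISO 8601 date

event = CoverCropEvent("F12", CoverCropClass.NON_LEGUME,
                       "cereal rye", "2022-10-01", "2023-04-20")
print(event.crop_class.value, event.terminated_on)
```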

Exact data structures and semantics need to be communicated to those capturing data to be used in the model.

There is a natural tension between making the model functional through ease of data collection by focusing on only a minimal set of “required” data versus also capturing potentially important (“desired”) information for the future. The qualified data collectors who are gathering field or lab data need to know clearly what to capture.

As an example, in addition to soil organic matter or soil organic carbon levels, perhaps we may want to collect information for future modeling related to soil mineral, microbial, DNA, and enzyme levels, enabled by technologies like portable FT-IR spectroscopy.

In addition to data structures and semantics, we recognized that there is important data that must be collected about the data input into the model. For instance, we need to define the scales for data capture (meters or inches?), as well as the location, time, tool, and method of capture.

Regarding model input, we need to identify a QA/QC process, and a methodology needs to be determined for handling “missing data” through imputation mechanisms.

Regarding the output from the model, we need to make sure users know what version of the model was used to determine an output. 
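
Tying the last two points together, a minimal sketch is below: missing inputs are imputed with an explicit flag that travels with the data, and every record is stamped with the model version that will be used. The median imputation and the version string are placeholder assumptions, not the program's actual methodology.

```python
# Sketch: impute missing inputs with an explicit QA/QC flag, and stamp every
# record with the model version used. Median fill and the version string are
# placeholder assumptions.
from statistics import median

MODEL_VERSION = "comet-daycent-demo-0.1"  # hypothetical version label

records = [
    {"sample_id": "F12-S003", "bulk_density": 1.31},
    {"sample_id": "F12-S004", "bulk_density": None},  # missing at the lab
]
fill = median(r["bulk_density"] for r in records if r["bulk_density"] is not None)

for r in records:
    r["bd_imputed"] = r["bulk_density"] is None   # flag travels with the data
    if r["bulk_density"] is None:
        r["bulk_density"] = fill
    r["model_version"] = MODEL_VERSION            # provenance for model outputs

print(records)
```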

While there is another USDA working group, the Model & Tools Group, that will be responsible for measuring the efficacy of the model and determining measurement methods and metrics to assess its performance, we recognize that we need a way for “the model to feed the model.” The model is intended to be dynamic (not static), and improvements to it need to be identified and rolled into the model somehow.

A separate Miro “room” looked at metadata for the overall modeling initiative, and we want to make sure that group considers the security of the ML models and the identification of adversarial AI risks that could arise through harmonization of different models.

We are aware of other international and proprietary efforts to quantify agricultural GHGs, and we need a way to harmonize or interoperate with those other models to the degree possible. AgMIP, an organization established in 2010 with the sole purpose of making agriculture models interoperable and intercomparable, might be a partner in this regard.

The figure below very accurately represents many of the data input and management challenges we identified.

A last important point not to be overlooked is that USDA needs to build a community of model users. Three kinds of users need to be accounted for:

Data Collectors & Inputters need to be communicated with so they clearly understand what data to collect and how to properly input data. This includes not only qualified data collectors but also soil lab technicians, so they understand what to analyze and how.

Other Modelers need to be communicated with so they can help to harmonize and interoperate models so we get the benefit of more data.

End Users who will use the output from the model need to be identified and included in the development of the model. We identified the following end users:

  • Other modelers & scientific researchers who will want to leverage the model’s data for their own modeling.
  • Certifiers & reporters will want to use the model for purposes ranging from carbon registries to the EPA reporting on US carbon levels to the UN.
  • Technical advisors for farms and ranches (like NRCS) who are looking to promote optimal climate-smart agriculture practice implementation that is crop and locale-specific based on the latest science and models.

Metadata Management

Moderator: Drew Zabrocki, co-founder of Centricity by Semios; advisor to the OATS Center at Purdue University and the International Fresh Produce Association

Conversations on Metadata Management underscored the importance of collaboration, data management, and technology in inspiring model creation, insights, and policy to improve soil health practices and promote sustainable agriculture.

The following summary points were raised in our discussions:

  • There is a need for collaboration between different organizations and stakeholders, especially industry. There are many systems and standards; working together will ensure the best outcomes.
  • The need for transparent and extensible frameworks to make data more accessible and comparable.
  • There will be changes. We need to incorporate standards and protocols for managing change.
  • The use of open-source software and technology can help in data interoperability and standardization.
  • The involvement of public and private actors in advancing a solution to meet the needs of all stakeholders.
  • The significance of community-driven science and participatory research.
  • The potential of soil data measurement, farm management information systems, and mapping technologies to unlock the full power of supporting insight tools.
  • The importance of data sovereignty and transparency for all stakeholders.

Our discussions focused on the following overarching areas:

Effective data governance and security require careful consideration of several key factors. These include establishing clear authority levels, documenting data privacy and consent management, defining data rights and obligations for stakeholders, ensuring transparent processes for data sharing, determining ownership, promoting interoperability, automating reporting, and implementing certification frameworks. By implementing these measures, businesses can foster accountability, protect privacy, and inspire trust in their data management practices.

During the discussion, we also delved into important topics such as interoperability, data sovereignty, automated attestation, and the utilization of open-source tools such as the AGAPE certification framework. Additionally, there was a focus on permitting labs to share soil test results with NRCS. We explored some of the challenges and potential solutions in these areas (see References), emphasizing the importance of maintaining control, trust, and collaboration for enhanced management and innovation across various domains.

Data, with linked documentation and semantic resources, adds valuable context and insights. It can be connected to on-farm data systems and tools, enabling aggregation of metadata at various levels. Engaging with diverse systems assists in research, auditability, and supports predictive and analytical modeling. In this pursuit, prioritizing published and well-documented APIs is crucial. The versatility of data extends to educational programs, rendering it a valuable resource for teaching and learning. In summary, data presents endless opportunities for developing new models, exploration, and innovation.

The agricultural sector holds immense potential in utilizing the available data for various applications like analysis, modeling, and education. Ensuring statistical relevance and correlation through data analysis becomes crucial, which may involve making subsets or derivatives of data accessible to the public. Leveraging existing farm management information systems like Agworld or FarmOS can significantly enhance data quality and reliability. These tools, integrated with accurate and diverse data sources, are invaluable for conducting research and generating obfuscated data for multifaceted purposes.

Throughout these discussions, the importance of implementing continuous improvement frameworks and fostering an interactive process that incorporates stakeholder participation became evident. These conversations underscored the need to embrace opportunities for learning and growth in order to continually evolve and advance.  

From soil data measurement and modeling for near-term regulatory needs to AI and machine learning that may unlock new insights, the potential for value creation is vast. Transparent standards, community-driven solution design, and built-in data sovereignty are crucial for all stakeholders.

This comprehensive discussion delved into various key areas crucial for effective data management and collaboration. It placed particular emphasis on standardized units of measurement, capturing and addressing uncertainties, and implementing strategies for continuous improvement. The importance of relating environmental data, establishing robust data governance practices, and seamlessly connecting systems was also brought to the forefront. Additionally, the conversation underscored the potential for unlocking value through information sharing and insights, while highlighting the need for ongoing improvement and active stakeholder engagement. Ultimately, this discussion served as a reminder of the paramount importance of data control, trust, and collaboration in driving effective management and fostering innovation across various domains.

The concluding session of the event used arrows ↩ to denote areas of the board where solutions should be focused, and red stickies to suggest ideas, projects, or paths forward in those areas.

Participants offered a set of proposed recommendations and existing methods to mitigate the potential of a data preparation boondoggle:

1. Need for improved field data collection systems: The company OurSci takes the approach of creating common “question set” libraries that use community-identified standard vocabularies designed to help “pre-align” data inputs with their downstream intended use, as exemplified in their field data collection tools SurveyStack and SoilStack. [axilab user research]

2. Need for improved interoperability among data collection systems: There is a need to adopt common agricultural vocabularies, ontologies, and data standards. Though there will always be stylistic differences among systems, participants noted the need for community consensus on how we should be structuring farm data sharing through ongoing efforts to resolve this issue. AgGateway, a consortium of agricultural and technology industry partners, has been developing the ADAPT Framework, which consists of an Agricultural Application Data Model, an API, and a suite of data conversion plugins, all designed to meet their set of proposed industry standards to “simplify [data] communication between growers, their machines, and their partners.” The Purdue OATS Center takes yet another approach through [OADA + AGAPEcert].

3. Need for an improved participatory data and protocol stewardship process: This includes collecting background data to determine baseline constraints and land status, and including more stakeholders in the protocol design process, as in the collaborative needs assessment among NRCS, the Purdue Agricultural Informatics Lab, OurSci, FarmOS, and OpenTEAM.

4. Need for open soil data collection protocols: Throughout this entire process, participants noted the need for the SOP to be a living but versioned document.

A few focus areas arose from the chaos near the end of the day. The quality assurance process was highlighted as one, with ideas for tackling it via certifications that can be passed along from a lab, and via sets of samples with quality-control duplicates that could establish a level of trust in the lab results as they move on through the models.

The largest focus area was institutionalizing the use of the MODUS standard for soil sampling lab results. One idea for tackling this was to make tools that make it easier for people to use MODUS than not. Building the MODUS community, participating in the AgGateway standards committee, adding MODUS requirements for program participation, and building open source tools and libraries are all in the mix. Though it was in a different area of the board, there was also a suggestion to build a database of the bulk density and soil carbon methods supported by specific labs, an ongoing critical effort around the MODUS standard that enables the existing open source tooling to work with many different labs’ reporting.

Finally, in order to kick-start the process, the suggestion was made at the “reporting to a centralized data repository” level to create an open source, redeployable implementation of a potential centralized platform. This way, the APIs and schemas can be initialized and iterated across parallel proof-of-concept pipelines, while maximizing the likelihood that an eventual centralized platform (or network of platforms) will easily interoperate with smaller-scale early developments and industry platforms.
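
To make the idea tangible, here is a minimal sketch of one endpoint such a redeployable platform might expose, using FastAPI. The route, schema, and field names are invented for illustration; real schemas would be iterated across the proof-of-concept pipelines described above.

```python
# Minimal sketch of one endpoint a redeployable data platform might expose.
# Route, schema, and fields are invented for illustration only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="redeployable-soil-platform (sketch)")

class LabResult(BaseModel):
    sample_id: str
    modus_code: str   # placeholder assay identifier
    value: float
    lab_id: str

STORE: list[LabResult] = []  # stand-in for a real database

@app.post("/v0/lab-results")
def submit_result(result: LabResult) -> dict:
    """Accept one lab result and acknowledge receipt."""
    STORE.append(result)
    return {"accepted": result.sample_id}

# Run locally with: uvicorn platform_sketch:app --reload
```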

We identified four summary actions to be taken:

  1. Define what data needs to be collected by data collectors or inputters and define the common data structures, ontologies, and associated metadata for model data. This includes defining a minimum data set necessary to start the model, additional data you might want to collect now for analysis, and also data you might want to collect in the future.
  2. Develop a QA/QC process for data input into the model and develop policies for handling “missing data.”
  3. Develop a process for harmonization or interoperability between this model and other models.
  4. Develop “user communities” to make sure their needs are captured in the development of the model. Three kinds of users were identified:
    1) Data collectors & inputters to feed the model,
    2) Other researchers & modelers who may want to collaborate to refine the model, and
    3) End users who will utilize the output from the model.

We identified the following actions to be taken:

  1. Establish a comprehensive data dictionary detailing variables and units (a minimal sketch of such a dictionary follows this list). Assistance for this project is available through the IRA-GHG initiative. For further details, please visit AgGateway’s website at www.aggateway.org.
  2. Create a hierarchical data measurement protocol that’s been expertly validated. The aim is to seek a flexible solution that’s neither overly prescriptive nor restrictive.
  3. Develop comprehensive guidance on metadata governance, security, and privacy by engaging all relevant stakeholders, including IT, legal, and technical experts. It is crucial to fully comprehend the intricacies of data privacy, consent management, and the rights and responsibilities of various parties involved.
  4. Explore and outline data sharing protocols, data ownership, and the challenge of maintaining interoperability while upholding data sovereignty. Seek out industry best practices and leverage open-source tools for automated reporting, utilizing the OODA loop framework. For validation of claims without compromising sensitive information, consider the implementation of the AGAPECert automated certification framework.
  5. Identify and resolve the need to manage uncertainty through the capture of temporal-geolocation information. Additionally, there is a requirement to record and report the measurement of uncertainty in the data.  
  6. Determine publicly available data related to soil, which can be connected with other data available from stakeholders likely available in farm management systems (FMS). Develop methods for how the soil data can be reliably associated with on-farm metadata such as fields and boundaries.  Map use cases for how the data is aggregated at different levels, including farm, regional, and national, to assist in frameworks and guidance documents.
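
As referenced in item 1 above, a data dictionary can start very small: each variable gets a definition, a unit, and an allowed range that validation code can check. The entries below are illustrative placeholders, not proposed definitions.

```python
# Sketch of a data dictionary: every variable carries a definition, a unit,
# and an allowed range. Entries are illustrative placeholders.
DATA_DICTIONARY = {
    "soc_pct": {
        "definition": "soil organic carbon by dry combustion",
        "unit": "percent by mass",
        "range": (0.0, 15.0),
    },
    "bulk_density": {
        "definition": "oven-dry bulk density of the fine-earth fraction",
        "unit": "g/cm^3",
        "range": (0.5, 2.2),
    },
    "sampling_depth": {
        "definition": "lower depth of the sampled increment",
        "unit": "cm",
        "range": (0, 100),
    },
}

def in_range(var: str, value: float) -> bool:
    """Check a reported value against the dictionary's allowed range."""
    lo, hi = DATA_DICTIONARY[var]["range"]
    return lo <= value <= hi

print(in_range("soc_pct", 1.8))       # True
print(in_range("bulk_density", 3.0))  # False: flag for review
```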

Despite the number of branches we found on this complex tree, we certainly identified some low-hanging fruit ripe for the picking. The issues at hand are broad and solutions will inevitably affect a diverse group of differing interests (government, researchers, modelers, farmers, ag service professionals, carbon markets, etc.).  The picture is therefore like a wide panorama, and while a panorama can be beautiful, its beauty can be distracting and even paralyzing: the bigger the picture the longer you stare at it. Panoramas are best built as a series of interconnecting pieces: each individual piece seems much more tractable than the whole, so there is much less time needed for admiring the problem’s complexity. If each piece is built in isolation, the picture will never come together in the end, but if one looks only to “the big picture,” they can never get any piece of it actually done.

The key to success here is therefore both parallelism (building pieces of the picture) and end-to-end design (build a crude “total picture” to inform the design of the pieces).  The connection points between the pieces represent data interfaces between players and will become the lingua franca that glues it all together.  One cannot successfully design such things without the necessary feedback loop of hypothesis and verification: i.e., we think this is mostly the right model, now let’s try it as a proof of concept and see where we’re wrong so we can modify the model and try again.

To that end, the lowest hanging fruit that came out of this event was the idea of focusing on a full end-to-end, narrow use case, and building a toolset in the community that enables coordinated development across more use cases in parallel.

The narrow use case proposed was dubbed “Corn-to-Comet”: collect some actual soil samples and specific practice data (cover crops?) on a few actual fields that produce corn, get that data to the COMET model, get the COMET outputs back to the stakeholders, and end with an overall dataset in a form that could be available for other models and model researchers, all with appropriate consideration of reasonable privacy and use rights.  The actual crop/fields chosen for this pipeline should reflect whoever is willing to participate, so the crop may or may not be “corn” in the end.

Backstopping this effort would be the open source development of a deployable “USDA platform” concept. The final outcome of the overall picture will involve reporting soil samples and practice data to a USDA-maintained system which can make such data available to model researchers. With this “redeployable data platform,” individual developers or projects can stand up their own instance of what an eventual USDA platform would look like using consistent schemas and build their own pieces of the puzzle to interface with their instance of the platform.  Versions of the schemas and platform code will then serve as an incremental integration test between the various pieces, and its open source nature provides the feedback channel necessary for the big picture to come together.  The “Corn-to-Comet” pipeline would use this proof-of-concept tool, and as additional early pipelines are begun from other crops to other models they have some help and examples to get started.

Finally, to retain the valuable insights of the broad community that participated in this event, a future event should be planned that gives participants a chance to talk through design issues, with a parallel “hackathon” or “collabathon” among developers to build out real parts of the Corn-to-Comet pipeline as well as to make some proof-of-concept tools for other pipelines.

We made an effort to capture references, keyword clusters and tags related to the event storm activity. Many participants highlighted collaborations already underway that should be evaluated.

AGAPECert Certification Framework: The AGAPECert framework is an automated certification system that aims to validate claims, exchange derivatives, and link related data across domains without revealing private data. This could be a useful solution for ensuring trust and transparency in various domains where certification or complex security policies are required.

AgGateway: AgGateway is a non-profit industry consortium focused on agricultural data interoperability and standards. They manage the MODUS lab test data standard, which is widely used for reporting soil test results.

Agworld: The Agworld ecosystem allows you to collect data at every level of the operation and share this data with everyone that matters. Agworld operates on over one hundred million acres across a broad range of commodities and environments. APIs and integrations with leading technology providers enable all stakeholders to work together on the same set of (field tested) data.

Collaboration and Innovation: It’s important to provide an opportunity for public and private actors to collaborate and find innovative solutions that promote better science, new innovations, and sustainable agricultural practices.

Community Engagement: The team emphasized the importance of community-driven design, research, and engagement.

Data Infrastructure and Management: The IRA-GHG Quantification Program aims to harmonize data for scientific standards and interoperability. Coordinating feedback, developing technical specifications, and creating infrastructure for data management are key aspects of this program.

Data Interoperability & Sovereignty: Standardizing protocols and developing open-source software can help improve data sharing and integration across different stakeholders.  The Trellis Framework from Purdue University’s OATS (Open Ag Technology & Systems) Center was referred to as a resource for practical MODUS tools and the OADA and AGAPE toolsets for sovereign interoperability at scale.

Farm Foundation: Farm Foundation is a non-partisan, non-profit dedicated to accelerating people and ideas in agriculture. Their mission is to build trust and understanding at the intersections of agriculture and society by creating multi-stakeholder collaborations. Their strategic priority areas are digital agriculture, market development and access, sustainability, and farmer health.

OpenTEAM: OpenTEAM is an open technology ecosystem for agricultural management that aims to facilitate data interoperability and community-driven science. It brings together stakeholders from various sectors to collaborate on improving data systems in agriculture.

SoilBeat and EarthOptics: SoilBeat and EarthOptics are companies that leverage AI, machine learning, and real-time data mapping to provide insights into soil health. Their technologies help agronomists and farmers make informed decisions about nutrient management and regenerative practices.

Soil Data Management: The National Soil Survey Center and USDA-NRCS play a crucial role in collecting, processing, and delivering authoritative soil data. Their work supports conservation planning and land management efforts.

Soil Health Tech Stack: The “Soil Health Tech Stack” is a term coined by Seana Day in an article that outlines the challenges she sees based on, among other things, her work co-authoring the USFRA Transformative Investment report about how technology and finance could scale climate smart, soil-centric agriculture practices as well as on information gathered during the Farm Foundation Regenerative Ranching Data Round Up. The “Fixing the Soil Health Tech Stack” activities will build upon those efforts and others. As such, the event will leverage pasture/rangeland data but with the goal of extending solutions to all soil-based agriculture production ecosystems.

#data #comparability #defining

  • It is important to agree on data standards and make it someone’s primary job to ensure data comparability
  •  Developers play a crucial role in defining data models and schemas
  •  Define data rights and obligations for different stakeholders
  •  Determine how data will be shared and who owns the data

#design #database #structure

  •  Design a flexible database that is not too prescriptive/restrictive
  •  Design the database structure effectively

#measurements #define #accuracy

  •  Consider using the ISO 19156 framework for observations and measurements (a simplified sketch follows this list)
  •  Explore ways to upscale measurements
  •  Define units of measurement for reporting and storage
  •  Define measurements of uncertainty and accuracy for field measurements
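
The sketch below loosely paraphrases the core ISO 19156 idea, in which an observation is an act that assigns a result to an observed property of a feature of interest via a procedure. It is a simplified illustration, not a conformant implementation of the standard.

```python
# Simplified sketch of an ISO 19156-style Observation. Field names loosely
# paraphrase the standard's concepts; this is not a conformant implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    feature_of_interest: str  # e.g., a sampling location within a field
    observed_property: str    # e.g., soil organic carbon
    procedure: str            # e.g., assay or protocol identifier
    result: float
    result_unit: str
    result_time: str          # ISO 8601 timestamp
    uncertainty: Optional[float] = None  # capture it when known

obs = Observation(
    feature_of_interest="F12-S003",
    observed_property="soil_organic_carbon",
    procedure="dry-combustion (placeholder id)",
    result=1.8,
    result_unit="percent",
    result_time="2023-11-02T15:04:00Z",
    uncertainty=0.1,
)
print(obs)
```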

#protocols #data #expert

  • Data lineage on lab results and field data should be published as part of versioned protocols
  • Publish metadata standards as protocols
  • Develop a data measurement protocol hierarchy vetted by experts
  • End-users of data may require specific protocols for sampling

#documented #data

  •  QA/QC standards should be documented to ensure data quality
  •  Documentation should be linked to other data products for easy reference
  •  Data collection purpose should be clearly documented
  •  Data sample design should be documented to ensure representativeness
  •  Data schemas should support inclusion of existing public datasets
  •  Provide detailed documents to help users navigate the data

#changes #year #data

  •  Implement change control for data schema/model changes
  •  Handle fiscal year/periodic changes in data
  •  Plan for iterative improvement process in February
  •  Develop a data variable dictionary for easy reference

#standards

  •  Consider ISO standards in database construction
  •  Leverage the good aspects of ISO and other standards

#dimensions #geography #capturing

  •  Standardize geography and time dimensions for better data integration
  •  Consider capturing uncertainty with temporal-geolocation information

#methods #methodologies #computation

  •  Methods should be documented for each observation
  •  Document computation methodologies

#agreements #sharing #governance

  •  Metadata governance, security, compliance requirements, search, and collaboration tags should be defined
  •  Policy changes may be needed for data sharing within governance/consent agreements
  •  Achieve interoperability while maintaining data sovereignty
  •  Contractual agreements may be necessary for data sharing

#claims #automated #certification

  •  Facilitate automated reporting of practices
  •  Implement an automated certification framework to validate claims without revealing private data

This post-event report was contributed by Ankita Raturi, assistant professor of agricultural and biological engineering at the Agricultural Informatics Lab & Open Ag Tech and Systems Center at Purdue University; Rob Trice, founder of The Mixing Bowl and Better Food Ventures; and Drew Zabrocki, co-founder of Centricity by Semios and advisor to the OATS Center at Purdue University and the International Fresh Produce Association.

Views expressed do not necessarily reflect the opinions of all participating organizations.

Fixing the Soil Health Tech Stack Now: An 8-Step Action Plan
https://www.farmfoundation.org/2022/11/07/fixing-the-soil-health-tech-stack-now-an-8-step-action-plan/
November 7, 2022

By Rob Trice. This article was first published in AgFunder Network. It is reposted with permission.

This is an exciting time in soil health, with USDA unleashing $2.8 billion in funding for climate-smart commodity agriculture. The hope is that organizations receiving grants for soil health work will spend this money in a manner that best serves the public interest.

Working together to fix the soil health tech stack is part of this process. We can do this by harmonizing and standardizing soil measurement data and analysis. This will build a thriving marketplace that rewards farmers and ranchers for taking actions and delivering outcomes that result in healthier soils and carbon sequestration.

The concept of the soil health tech stack — the three-layer graphic below — and the need to bridge its layers were introduced in September 2021 in an article by my colleague, Seana Day.

Over several months in 2022, Farm Foundation and its partners, including The Mixing Bowl and TomKat Ranch, held three interwoven events to address the challenges of the soil health tech stack. Specifically, we looked to bridge soil data interoperability, calibration, and standardization, and to ease the movement of digital information between the entities collecting, analyzing, and acting on soil-centric data.

An Overview of Fixing the Soil Health Tech Stack Events

Soil Sampling Campaign: From May to June 2022, robust soil sampling took place at TomKat Ranch, an 1,800-acre regenerative cattle ranch in Pescadero, California. Point Blue Conservation Science, along with students from Skidmore College working with the non-profit The Soil Inventory Project, collected over 1,000 soil samples. These were analyzed for total percent carbon by dry combustion at three separate analytical laboratories. Bulk density was measured at a subset of sampling locations to create a robust soil carbon data layer across five pastures where TomKat had applied treatment regimes.

The Soil Data Hack: The Purdue Open Agriculture Technology Systems (OATS) Center took the soil data results from the campaign to create a publicly available data set and combined it with TomKat’s historical soil data and other soil data sets. This established a data foundation for a soil data hack that took place during the “Fixing the Soil Health Tech Stack” virtual conference from August 23 to 24, 2022.

All of the soil data was put into the MODUS data standard. MODUS defines data terminology, metadata and file transfer formats to expedite the exchange, merging, and analysis of soil and other agriculture testing data. It is used by some but not all soil labs today.

AgGateway’s Laboratory Data Standardization Working Group is upgrading MODUS to MODUS 2.0 and is a key proponent of its adoption by all labs analyzing soil. As AgGateway outlines here, the use of standardized soil data can help scale the efficiency of a low-margin business, decrease errors, improve lab turnaround times, and feed data to farm management information systems (FMIS) for analysis and recommended action.

The Soil Data Hack, a two-day hackathon-style event, was designed to make tangible progress toward fixing the soil health tech stack by encouraging participating developers to create open source code to help with the transfer and presentation of soil-related data in a common medium. 

Fixing the Soil Health Tech Stack Conference: The virtual event was four hours long on both Tuesday, August 23rd and Wednesday, August 24th, 2022. The broad arc of the conference topics included an overview of the soil data hack, the concept of the soil health tech stack, and how to fix it.

Results From the Fixing the Soil Health Tech Stack Events
The soil sampling campaign helped us develop a clean data set in the MODUS data format. It also helped us better understand the disparity in lab analysis, as two labs received the same soil samples and returned different results for both soil carbon and bulk density.

During the two days of the Soil Data Hack, the hackers:

  • Took the MODUS-based soil data and turned it into a JSON format (see the sketch after this list)
  • Fed the data into a business intelligence and data visualization system (Power BI)
  • Pulled it into OpenTEAM’s open source FarmOS FMIS
  • Leveraged FarmOS to associate soil data with GPS lat/long coordinates
  • Used RDF (a World Wide Web Consortium standard data description and exchange format) to put soil data on the blockchain and make it available for Regen Network’s carbon credit program
  • Linked the MODUS data into an HTML browser view for visualization
  • Used HTML to compare different soil data sets
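
For readers curious about the first bullet, the sketch below converts a simplified, MODUS-like XML snippet into JSON records. The element names are invented for illustration; the real MODUS schema defines its own structure and test codes:

    import json
    import xml.etree.ElementTree as ET

    # Hypothetical, heavily simplified MODUS-like XML; only the shape of an
    # XML-to-JSON conversion is shown here.
    modus_xml = """
    <ModusResult>
      <Sample id="TK-0042">
        <Analyte name="total_carbon" unit="%">2.41</Analyte>
        <Analyte name="bulk_density" unit="g/cm3">1.12</Analyte>
      </Sample>
    </ModusResult>"""

    root = ET.fromstring(modus_xml)
    records = [
        {
            "sample_id": sample.get("id"),
            "analyte": analyte.get("name"),
            "unit": analyte.get("unit"),
            "value": float(analyte.text),
        }
        for sample in root.findall("Sample")
        for analyte in sample.findall("Analyte")
    ]
    print(json.dumps(records, indent=2))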

The virtual conference revealed broad recognition of the need to fix the soil health tech stack. Videos from the virtual conference and hackathon are live on the Farm Foundation YouTube channel. The videos include the conference sessions as well as the report-out from the hackathon.

Weaving Together Existing Solutions
Through our events, we determined that, by weaving together existing solutions, we can make great strides in fixing the soil health tech stack. Specifically:

  • A dynamic framework for monitoring soil health exists as part of Point Blue Conservation Science’s Range-C and forthcoming Crop-C Monitoring Projects.
  • A solid data collection method exists with The Soil Inventory Project’s approach for in-field, distributed soil sampling and transfer of samples to soil labs for analysis.
  • A lab sample prep standard operating protocol exists in what the Soil Health Institute has developed, and it will help minimize testing errors and variance between soil labs.
  • A soil data standard exists in the form of MODUS 2.0 as maintained by AgGateway.
  • Sovereign MODUS soil data exchange can occur through platforms like Purdue OATS’ Trellis, which can transfer data as XML or JSON. It can also link to other interoperable farm data applications like OpenTEAM’s FarmOS and Ag Data Wallet, the USDA’s Producer Operational Data System, or other FMIS programs. The key point to underscore is that data transfer tools exist to enable the data owner to manage what data is shared, when, and with whom.
  • Large-scale open aggregated MODUS soil data sets can be made available for analysis through tools like the OpenTEAM Digital Farmer Coffee Shop.

Additional effort is now required to build out the interoperability between these tools and others. 

The 8-Step Action Plan to Fix the Soil Health Tech Stack

1. Upgrade and invest in soil health testing infrastructure

It is widely acknowledged that the United States has the most robust soil lab infrastructure in the world. However, much of that infrastructure was established to measure soil type and soil chemical nutrient levels. With more focus on soil organic carbon and rising interest in microbial analysis, we need to upgrade soil lab capabilities to account for new demands.

In addition to physical measurement equipment, we should also add knowledge management infrastructure to enable the digitization of lab data in a way that supports provenance, attribution, and sharing-consent management.

We should also not overlook the need to train and staff an adequate number of lab personnel proficient with the new testing equipment and digital data tools.

2. Expand the use of standardized soil field collection methods to create a national soil health inventory database

The non-profit The Soil Inventory Project (TSIP) was awarded a $20 million USDA grant to fund climate-smart practice adoption on 120,000 acres nationwide and to apply its distributed inventory system to monitor soil health outcomes. This is part of TSIP’s wider effort to create a distributed national soil health database using scientifically proven and affordable methods for collecting and analyzing soil data. With its USDA grant, TSIP is better positioned to help other organizations overcome the cost and burden of collecting large-scale soil data in the field.

We should also embrace the further development of new techniques and methods that can ease the time and resource burden of in-situ soil sampling. Additionally, with the establishment of a large, ground-truthed soil database, we may one day be able to undertake soil analysis through airborne remote sensing alone.

Soil data should be aggregated in a national soil health inventory database (such as the one The Soil Inventory Project is building). That database should include an accessible, versioned, searchable registry of measurement protocols to enable interoperability of results. Machine learning data sets may be fed input from multiple measurement methods (field measurement, lab measurement, and remote sensing), and a registry of measurement protocols will allow the comparison and calibration of analyses between measurement methods.
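
To illustrate the registry idea, an entry might pair each measurement protocol with a version and a calibration linkage; all names below are hypothetical:

    # Each stored measurement would cite a protocol id and version, so
    # results produced by different methods can later be compared and
    # calibrated against one another. All field names are hypothetical.
    protocol_registry = {
        "soc-dry-combustion": {
            "version": "1.2.0",
            "measurand": "soil_organic_carbon",
            "method": "dry combustion",
            "supersedes": "1.1.0",
        },
        "soc-mir-spectroscopy": {
            "version": "0.9.0",
            "measurand": "soil_organic_carbon",
            "method": "mid-infrared spectroscopy",
            "calibrated_against": "soc-dry-combustion@1.2.0",
        },
    }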

Of course, soil data should be shared securely in an anonymized and aggregated fashion to establish regional baselines, in line with the FAIR (Findable, Accessible, Interoperable & Reusable) data principles.

3. Embrace standardized frameworks for field monitoring of soils

To promote apples-to-apples monitoring of agricultural lands, common frameworks and stratification tools should be promoted to help farmers and ranchers select indicators, develop study areas, determine how many samples to take and when, and ensure data quality. Point Blue and its partners have received a USDA grant to promote the adoption of the Range-C Monitoring Framework to assist farmers, ranchers, and researchers with these tasks. Future standards such as Crop-C will be versioned to adapt to upgrades in technology, with new protocols referenced in a common shared registry.

4. Promote and adopt lab soil sample preparation standard operating procedures

To minimize the discrepancies that can result from different labs handling soil sample testing differently, we recommend encouraging all soil labs to abide by the Soil Health Institute’s soil sample preparation standard operating procedures, which were developed as part of its “North American Project to Evaluate Soil Health Measurement.”

Labs can capture whether they have followed the protocols through tools made in Survey Stack or the Question Set Library. It will be helpful to have information related to the protocols in the metadata that travels with soil samples from the field through lab testing.

The Soil Health Institute SOPs should also be versioned in a registry so that labs can record the SOP version they followed when undertaking a soil analysis.
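
In practice, this could be as simple as a few metadata fields traveling with each lab result; the field names below are hypothetical:

    # Hypothetical fields illustrating SOP provenance traveling with a result.
    lab_result_metadata = {
        "sample_id": "TK-2023-0042",
        "lab": "Example Soil Lab",
        "sample_prep_sop": {
            "name": "SHI soil sample preparation SOP",  # assumed registry name
            "version": "2.1",                           # hypothetical version
        },
        "analysis_date": "2023-06-14",
    }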

5. Promote the development of tools using MODUS to make it the standard format to harmonize soil data

We need to see MODUS adopted as the data output of soil labs. AgGateway is leading the application of the MODUS data format in agriculture and is deploying infrastructure to make the definitions of its codes machine-readable and machine-actionable. It is currently promoting version 2.0, and we should anticipate that future versions will arise. Labs should make soil analysis results available in MODUS and available to clients online (in machine-readable formats like CSV or JSON, or via systems like FarmOS, not just in hard copy or PDF format).

During our two-day hackathon, we saw both a state government’s department of agriculture and a large digital agtech company commit to using the MODUS-based tools we developed. We should leverage these “early adopters” to refine the tools and then promote them heavily to gain adoption.

Adoption of standardized soil data tools should not be limited to the US. The MODUS data standard needs to become a global format for soil data. Additionally, we should anticipate the need for data standards for future soil analysis. For instance, FAO’s GLOSOLAN has already created a standard approach for soil spectroscopy that should be promoted for global harmonization. Internationally focused actors like CGIAR, OpenGEOHub, and LandPKS are potential allies to create a global soils ledger.

The simplest way to promote MODUS is to provide resources for the development of MODUS-based tools and to promote the use of MODUS-based tools amongst the developer community.

6. Address sovereignty of agriculture data

While technical solutions exist to maintain a farmer or rancher’s sovereignty over data and data sharing, many in agriculture are unaware of these solutions, and most software solutions in agriculture do not use these tools today. We need a conversation to get past the constant boogeyman of “data privacy” so that we can give those who choose to share data confidence that those receiving it will use it transparently and for appropriate purposes.

Conditional data use agreements and consent management processes controlled by the producer need to be more widely embraced. The conversation needs to include the benefit to the farmer/rancher of sharing data and also needs to address (beyond technical matters) the social and legal aspects of implementing a trusted solution. 

7. Get soil health data on the balance sheet

We need a discussion on how to account for soil data on the balance sheets of farming and ranching operations. Is it possible to create a single score that encompasses the soil health of agricultural land (similar to a corn suitability rating, for instance)?

While there are emerging markets for soil carbon and other environmental marketplaces, financial recognition of healthy soils appears to be undervalued by land buyers, lenders, and insurers.

Research exists to show the long-term benefits of healthy agricultural soils. Tools do exist to score soil health, to hold environmental claims data and to enable benchmarking of data. However, where are market participants like banks and insurance companies in terms of adopting the use of soil health assessment tools in their financial products? 

8. Adopt a common semantic infrastructure for soil health

An additional source of friction occurs when different layers of a tech stack use different semantics (e.g., variables, controlled vocabularies, etc.) because this introduces the need to map/translate among them.

We need to develop and adopt a semantic infrastructure (a common set of variable definitions and their associated controlled vocabularies, distributed using APIs) shared across the tech stack to make communications easier. AgGateway’s Agrisemantics Working Group is implementing such an infrastructure to distribute MODUS Codes, their definitions, and other semantic resources.
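
The consumption side of such an infrastructure might look like the sketch below: a client resolves a code to its shared definition at runtime rather than hard-coding meanings locally. The endpoint and code string are hypothetical, not AgGateway’s actual API:

    import json
    from urllib.request import urlopen

    # Hypothetical base URL; AgGateway's Agrisemantics service will define
    # its own endpoints and payloads. The point is the pattern: resolve a
    # code to one shared definition instead of hard-coding meanings locally.
    SEMANTICS_API = "https://semantics.example.org/modus-codes/"

    def lookup_definition(code: str) -> dict:
        # Fetch the shared definition and controlled vocabulary for a code.
        with urlopen(SEMANTICS_API + code) as resp:
            return json.load(resp)

    # Usage (hypothetical code string):
    # definition = lookup_definition("S-TC-COMB")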

With the addition of the semantic infrastructure, the soil health tech stack gains a common semantic layer shared across the stack.

There was keen interest amongst the event participants to turn our talk into action and to refine our tools to fix the soil health tech stack. I welcome your ideas on how we can move forward collaboratively on these initiatives.

Rob Trice is founding partner of The Mixing Bowl and Better Food Ventures. Other contributors to and reviewers of this article include: Aaron Ault (Purdue Open Ag Technology Center), Chelsea Carey (Point Blue Conservation Science), Kris Covey (The Soil Inventory Project & Skidmore College), Dorn Cox (OpenTEAM), Andres Ferreyra (Syngenta; AgGateway member/volunteer), Martha King (Farm Foundation), Wendy Millet (TomKat Ranch), Cristine Morgan (Soil Health Institute), Liz Rieke (Soil Health Institute), and Drew Zabrocki (Semios).

Where’s My Stuff?: Supply Chain Virtual Event Storming https://www.farmfoundation.org/2022/10/20/wheres-my-stuff-supply-chain-virtual-event-storming/ Thu, 20 Oct 2022 22:00:50 +0000

Farm Foundation, in partnership with the Supply Chain Optimization and Resilience (SCORe) Coalition/ASTM standards development process, is convening a multistakeholder, virtual event on November 7 and 8, 2022, focused on creating shared understanding of the role standardized, interoperable digital data can play in the supply chain.

The discussion will focus on the status of standardized, interoperable data in multi-party supply chains, as well as the current challenges and needs. The event will also cover data sovereignty and privacy within the supply chain, and the tools available to allow information owners to control what information is shared, with whom, when, and for what purposes.

ASTM International recently formed a new committee (F49) to develop recommended frameworks, standards, best practices, and guides related to the sharing and use of digital information across the supply chain. The ASTM effort directly relates to the United Nations Sustainable Development Goal #9 on industry, innovation, and infrastructure.

Fixing the Soil Health Tech Stack https://www.farmfoundation.org/2022/05/05/fixing-the-soil-health-tech-stack/ Thu, 05 May 2022 15:01:50 +0000

Recorded conference sessions may be accessed on the Farm Foundation YouTube channel.

Fixing the Soil Health Tech Stack: Gathering for Action was a two-day virtual conference that took place August 23-24, 2022. It comprised three interwoven activities: a soil sampling campaign, a soil data hack, and the “Fixing the Soil Health Tech Stack” virtual conference.


The “Soil Health Tech Stack” is a term coined by Seana Day in an article outlining the challenges she sees. It is based on, among other things, her work co-authoring the USFRA Transformative Investment report on how technology and finance could scale climate-smart, soil-centric agriculture practices, as well as on information gathered during the Farm Foundation Regenerative Ranching Data Round Up. The “Fixing the Soil Health Tech Stack” activities build upon those efforts and others. As such, the event will leverage pasture/rangeland data, but with the goal of extending solutions to all soil-based agriculture production ecosystems.

The Soil Health Tech Stack, as defined by Seana Day, partner at Culterra Capital.

Three Components of the Soil Health Tech Stack Conference

Preceding the August conference was an intense “Soil Sampling Campaign” that developed a robust data set for a “Soil Data Hack” run concurrently with the conference.

The Soil Sampling Campaign

Starting in mid-May, a robust soil sampling effort occurred at TomKat Ranch, an 1,800-acre regenerative cattle ranch in Pescadero, California. TomKat Ranch collects ecological data on the ranch by participating in Point Blue Conservation Science’s Rangeland Monitoring Network, including soil tests across the ranch starting in 2014. These data are publicly available as part of the TomKat Ranch Data Project.

Point Blue Conservation Science, along with the non-profit The Soil Inventory Project, collected approximately 1,800 soil samples, which then had soil carbon percentage and bulk density analyzed by three different soil labs to create a robust data layer across five pastures where TomKat has applied treatment regimes. The soil sampling campaign used different measurement methods to account for and detect carbon and bulk density in the soils of these pastures.

The aim of having different tools analyze similar soil samples across pastures was to help describe the accuracy and cost of different soil analysis measurement and mapping tools, and the variance that occurs between methods.

The Soil Data Hack

By the beginning of August, the Purdue Open Agriculture Technology Systems (OATS) Center had taken the data results from the Soil Sampling Campaign and created a publicly available data set that served as the data foundation for the Soil Data Hack that took place during the “Fixing the Soil Health Tech Stack” conference August 23-24.

The Soil Data Hack was designed to make immediate, tangible progress toward fixing the soil health tech stack by having participating developers create open source code to help with the transfer and presentation of soil-related data in a common medium through a two-day, hackathon-style event. The conference and hackathon ran in parallel, with opportunities for report-outs by hackathon participants during the meeting. The results of that work will help to form the foundation of soil health data interoperability.

Fixing the Soil Health Tech Stack Virtual Conference

The virtual conference was four hours long on both Tuesday, August 23rd and Wednesday, August 24th. The broad arc of the conference topics included an overview of the soil data hack, understanding the concept of the soil health tech stack, and how to fix the soil health tech stack.

A post-conference summary will be published by Farm Foundation and shared with meeting participants.

These activities are being led by Farm Foundation in partnership with The Mixing Bowl, The Soil Health Institute, Point Blue Conservation Science, the Purdue Open Ag Technology & Systems Center, Semios, The Soil Inventory Project, and the TomKat Ranch Educational Foundation.

The Regenerative Ranching Data Round Up https://www.farmfoundation.org/2021/09/23/the-regenerative-ranching-data-round-up/ Thu, 23 Sep 2021 13:34:15 +0000

Megan Shahan is a regenerative food and agtech consultant with The Mixing Bowl, Better Food Ventures and TomKat Ranch.

Continuing Farm Foundation’s work to advance data interoperability in agriculture, The Mixing Bowl, Purdue OATS, Centricity, and other collaborators hosted the Regenerative Ranching Data Round Up on August 24, 2021.

Regenerative ranchers manage grazing and/or browsing animals (such as cattle, sheep, or goats) with the intent to achieve specific ecological, economic, social, and management objectives. Regenerative grazing (or prescribed grazing) is recognized by the USDA as a climate-smart agriculture practice. The growing pool of scientific studies and farmer/rancher case studies quantifying the ecological, economic, and social impact of production methods—as well as the increasing urgency of addressing climate change—is adding to the mounting interest in climate-smart practices, like regenerative grazing, among scientists, policy makers, producers, eaters, and businesses. 

Numerous corporations have announced significant investments and initiatives designed to support and scale regenerative practices across their global supply chains, including Cargill, General Mills, Land O’Lakes, Danone, McDonald’s, PepsiCo, and Nestlé, among others. These initiatives signal a notable shift in how companies are planning for and adapting to future climate risks across food and agriculture supply chains, and they are critical efforts to build resilience into our food system. And yet, regenerative grazing is practiced on only about 1% of US rangeland today.[1] Access to locale-specific information (including technical assistance) and improved data flow are necessary to scale this practice both in the US and abroad.

To that end, the Regenerative Ranching Data Round Up gathered a large, diverse, and global group of regenerative ranchers, landholders, value chain partners, software providers, conservationists and land trust representatives, scientists, academics and more to link the information flows necessary to implement and scale the practice of regenerative grazing. Participants placed ‘sticky-notes’ on a virtual whiteboard to develop a community-led understanding of the regenerative ranching sector and highlight common data challenges.

The next event, The Regenerative Ranching Data Rodeo, will gather coders to write real-world code and build real-world software to help solve some of the data challenges identified in the Regenerative Ranching Data Round Up.

Farm Foundation and its partners are in the planning phases for the Regenerative Ranching Data Rodeo. If you or someone you know would like to be involved in the Rodeo, please contact Martha King or Todd Price.

Data interoperability for regenerative ranchers

The ability to move data between systems and devices remains a sticking point across the agriculture industry, impacting established commodity production models and diverse, more localized regenerative production models in much the same way. The resulting data bottlenecks limit insights, innovation, and efficiency throughout the supply chain, hindering daily farm or ranch operations and contributing to a food and agriculture system with less agility and resilience to adapt, withstand, and respond to future shocks and disruptions.

In my work with TomKat Ranch, a regenerative ranch in Northern California, I see the need for data interoperability firsthand. Regenerative management is a continuous cycle of taking action, measuring results, and refining new actions based on the outcome of previous actions. Rather than maximizing a single goal, regenerative ranchers use careful adaptive management to best align timing, location, duration, and intensity of grazing and other agricultural activities with the multiple economic, social, and environmental goals of the rancher and other stakeholders. Investments in these goals must be justified or paid for primarily through the sale of cattle, and hence planning and goal setting is an ongoing process of optimizing a system subject to the constraints of trade-offs between goals.

Consistent monitoring protocols can help reduce uncertainty, quantify the outcomes of management practices, and track changes over time. Monitoring efforts vary by ranch, but typically include a mix of visual assessments, manual data gathering, sensor data, and lab analysis, which are themselves a function of available ranch time and resources. TomKat Ranch, in partnership with Point Blue Conservation Science, collects data on soils, birds, rainfall, streams, pasture management and forage production, animal performance and other metrics, in addition to standard ranch and business data. None of this data is (easily) connected electronically, a common struggle for ranchers and farmers across the country.

Connecting these disparate sets of data to enable an integrated view of a regenerative ranching operation will help illuminate real world interactions and trends, allow ranchers to gain actionable insights for responsive adaptive management and improve outcomes from soil to steak. At scale, data interoperability in regenerative ranching supports beneficial outcomes across the multiple interrelated systems that touch—and are touched by—food production on rangelands: healthy animals and nutritious food, healthy soils and diverse microbial communities, and resilient ecosystems, businesses, and communities.

Event storming: regenerative ranching

Through a process known as event storming, collaborators articulated events, decisions, people, and information related to planning, managing, evaluating, and improving four interrelated areas of regenerative ranching:

  1. Soil and Pasture Health / Environment
  2. Cattle / Herd Management
  3. Daily Logistics / Business Success
  4. Certification / Regulatory

Participants placed ‘sticky-notes’ on the virtual Miro board to develop four “big picture” timelines, or industry maps. The industry maps provide insight into what happens on a regenerative ranch, who is making decisions, what questions need to be answered, and what data is needed at which points in time.

In an effort to produce robust industry maps during our limited time together, the scope of the event storming session was narrowed to a single livestock species: cattle. Regenerative ranchers often graze multiple species of livestock to vary the animal impact on the land and leverage differing forage preferences of the animals (cattle and goats, for example). Insights from this event can be applied broadly to regenerative ranching, but specifics may vary.

The event storming process highlighted the vital connection between ecological health, animal health, and economic health. Core to successfully managing the herd is managing the environment to improve soil health, increase water availability, and produce diverse, high quality forage to ultimately foster a resilient, profitable regenerative ranching business.

Data is critical in goal setting, planning, measuring outcomes, and adapting to improve outcomes on a regenerative ranch. The sheer number of metrics being tracked, coupled with the long time horizon of ecological change and the delay in realized impacts of management decisions, make it difficult for a human to keep track of mentally—data recording, storage, and analysis methods are key to success.

The motivation for data collection ranges from ranch management insights to certifications or participation in carbon markets; the cost and time commitment of monitoring efforts also vary, with carbon markets typically requiring the most intensive and expensive monitoring protocol. Some metrics can be measured digitally via sensors, visually or manually (e.g. forage height, soil structure, soil cover, “fullness” of the cattle, weight gain, conception rate), and others require lab analysis (e.g. water quality, soil organic matter, manure samples).  

There was a clear message from many participants that regenerative practices, if properly communicated through the supply chain, should be able to garner premiums and additional market benefits. Improved communication of data throughout the supply chain, from the ranch to consumers, is critical.

Certification efforts appear to coalesce around two goals: 1) the opportunity to increase demand among eaters through education and marketing of regenerative practices, and 2) the production of certifications necessary to communicate those practices to eaters. There was a sense that the necessary standards are still lacking for regenerative beef and much work is yet to be done to create standards, relate new standards to existing standards, market those certifications, and simplify/streamline reporting, sharing, and record keeping. Participants clearly felt that ranchers need to work together and be included in the certification discussions, enabling them to influence the standards dialogue during such an evolution.

The Regenerative Ranching Data Round Up ultimately highlighted a distinct gap in data capabilities and calibration, a gap common in younger industries. Various projects tackling “low-hanging fruit” are necessary to digitize regenerative ranching in a meaningful way, such as:

  • providing digital tools to ranchers working the land and caring for the cattle;
  • improving digital communication between ranchers and processors;
  • developing accessible public datasets for calibration between soil labs and sensors; and
  • simplifying the ability for data used in operational decision-making software to be automatically re-used as certification data.

For more information on the event storming process as well as a detailed summary of the industry maps, challenges, and ideas that emerged, see the Regenerative Ranching Data Round Up Summary.

What’s next: The Regenerative Ranching Data Rodeo

Just as the event storming process was community-led, so too is the finalization of next steps. Two high-impact challenges stood out for folks to wrangle at the follow up event, the Regenerative Ranching Data Rodeo:

  1. The standardization and calibration of soil sampling data and analysis; and 
  2. Enhancing the communication of the value of regenerative ranching practices (through digital certifications) to buyers and eaters.

The Regenerative Ranching Data Rodeo will likely involve 1) the production of soil lab analysis data in a standard, machine-readable format, 2) utilizing that data to compare and calibrate, and 3) consuming the resulting data via a standardized API to produce privacy-preserving certifications that can be automatically communicated through the supply chain.
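
As a toy illustration of the second step, comparing paired results from two labs and fitting a simple linear correction might look like this (all numbers invented):

    import statistics

    # Paired measurements of the same samples by two labs (% total carbon).
    lab_a = [2.10, 2.45, 1.98, 2.60]
    lab_b = [2.25, 2.58, 2.11, 2.74]

    # Ordinary least squares fit of lab_b ~ slope * lab_a + intercept.
    mean_a = statistics.fmean(lab_a)
    mean_b = statistics.fmean(lab_b)
    slope = (
        sum((a - mean_a) * (b - mean_b) for a, b in zip(lab_a, lab_b))
        / sum((a - mean_a) ** 2 for a in lab_a)
    )
    intercept = mean_b - slope * mean_a
    print(f"lab_b ~= {slope:.3f} * lab_a + {intercept:.3f}")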

Learn more about Farm Foundation’s work to advance data interoperability in agriculture here.

 *******

To partner with Farm Foundation in their work on data interoperability in regenerative ranching or beyond, please contact Martha King or Todd Price.

[1] US Ag Census (NASS, 2019); ERS (2017)

The Regen Ranching Data Round Up https://www.farmfoundation.org/2021/08/03/the-regen-ranching-data-round-up/ Tue, 03 Aug 2021 22:41:46 +0000

Continuing our work to advance data interoperability in agriculture, Farm Foundation, The Mixing Bowl and Purdue OATS, with other collaborators, will host the Regen Ranching Data Round Up on August 24, 2021, at 5 p.m. Central.

The purpose of the Round Up is to gather ranchers who practice adaptive planned grazing, related data solution providers and other critical ecosystem partners (land owners, customers, scientists, financiers) to link the information flows necessary to scale adaptive planned grazing as a regenerative agriculture practice.

Technology and data to scale adaptive planned grazing

Adaptive planned grazing is a scientifically proven, USDA-backed climate-smart agriculture practice that can decrease soil erosion, build soil health, stimulate photosynthesis, promote biodiversity, and sequester carbon. Better data in the hands of ranchers and their ecosystem partners can help to address many of the barriers to scaling adaptive planned grazing.

Additional layers of data can provide better insights into the observed changes resulting from practice implementation, better comprehension of the business case, and better alignment of practice outcomes with marketplace demand.

Ranchers today lack access to critical locale- and practice-specific knowledge to optimize the performance of adaptive planned grazing in their operations. Much of the information exchange regarding best practices happens through in-person or online rancher-to-rancher groups. An abundance of case studies from across the world exists but, to date, there is a dearth of information to inform ranchers on the specific actions they should take for their soil type, their vegetation, their climate, and their livestock type to achieve their desired outcomes.

How can we improve the amount, quality and usability of data to make practical operational recommendations for ranchers? How can we use information to de-risk operations transitioning to adaptive planned grazing, accelerate the collective learning curve of adaptive planned graziers, and realize the potential of adaptive planned grazing at scale?

Better information exchange can also inform the economics of a regenerative ranching enterprise and meet the information needs of ecosystem partners (customers, bankers, land lessors, insurers, and certifiers). There are a number of organizations that offer solutions to collect data from pasture operations. Some are in-field (soil and other sensors), some are lab-based (soil testing), and some are remote (e.g., drone and satellite aerial sensing).

There are also a number of organizations that offer solutions with the potential to connect data from pasture operations with other data. These might include animal health or herd management software solutions; farm/ranch management software solutions (FMS); measurement, estimation, reporting, and verification solutions (MRV); or external certification and scientific data-gathering tools.

We seek to connect these organizations and their solutions so they work together in an interoperable fashion across the value chain, amongst producer ecosystems and across geographies to accelerate the adoption of these practices, optimize implementation, and maximize the market for regenerative beef.

The Regen Ranching Data Round Up and follow-up events are intended to further the application of data to scaling adaptive planned grazing.

Goal: map supply chain and identify data gaps

This “Data Round Up” event will gather stakeholders from across the value chain to jointly identify actions, actors, and data exchanges that occur, or need to occur—from soil to supper, ranch to ribeye—and in doing so, identify data solutions, data gaps, and data interoperability gaps.

The outcome of this event will be a supply chain map and a set of identified data challenges that can be tackled as a next step to help ranchers and their ecosystem partners achieve success. The map can be shared publicly to benefit anyone interested in solving a data challenge related to regenerative beef. The ensuing discussions and the published map will also yield insights on non-data issues related to scaling adaptive planned grazing that emerge during the event storming discussion.

Farm Foundation completed a similar “event storming” activity for the pork industry in conjunction with The Mixing Bowl, the Purdue Open Ag Technology Systems (OATS) Center, the National Pork Board, and other partners in the fall of 2020. The collaborative mapping exercise helped identify the topic that was the focus of the subsequent event: a “hackathon” to develop and build code for a real-world, open source solution to fill a needed data gap. At this event, held March 24-26, 2021, teams collaborated to build an open source, digital “advance ship notification” to connect data between pork farmers, haulers, and pork processors. Additional pork industry problem-solving hackathons are being planned for 2021.

What’s next: build data connections and interoperable open-source solutions

Following the Round Up, our goal is to host another event to develop solutions to the identified data problems in regenerative ranching through a “hackathon,” similar to the process we used in the pork industry. The goal of this hackathon’s participants is not to win prize money, get PR, or create competition between teams, but to work collaboratively to find ways to collect and connect ranching data, calibrate data between solution providers, connect existing solution providers’ data, and hack the development of open source solutions where nothing exists today. More information about the hackathon event will be shared as it develops.

Want to join us for the Regen Ranching Round Up?

Participants from the regenerative ranching space, including ranchers, land owners, and others throughout the value chain are invited to join the Regen Ranching Data Round Up as either active participants, who will be involved in the actual “event storming” to map the supply chain, or as spectators, who can observe the discussion. Register to participate here.

Farm Foundation® Forum on Digital Agriculture to be Held August 11 https://www.farmfoundation.org/2021/07/22/farm-foundation-forum-on-digital-agriculture-august-11-2021/ Thu, 22 Jul 2021 22:32:21 +0000

Free virtual event at 9 a.m. CDT

OAK BROOK, Ill.—Farm Foundation®, an accelerator of practical solutions for agriculture, will host its next virtual Forum, Advancing Digital Agriculture at the Farm Level, on Wednesday, August 11 from 9:00 to 11:00 a.m. CDT.

Shari Rogge-Fidler, President and CEO of Farm Foundation, will moderate the panel, which will include diverse perspectives from these expert contributors:

  • Teddy Bekele, Senior Vice President and Chief Technology Officer, Land O’Lakes, Inc.; Chairman, FCC Task Force for Precision Agriculture Connectivity and Adoption
  • Brian Krambeer, President and CEO, MiEnergy Cooperative
  • Dean Nierling, Farmer; Chair, MiEnergy Cooperative Board
  • Steve Pitstick, Owner, Pitstick Farms

The session will last two hours, during which the panel will explore the barriers to and enablers of on-farm digital agriculture practices, broadly including topics such as rural broadband and connectivity, precision agriculture, data management, and meeting user needs. Audience members will have the opportunity to submit questions for the panelists to answer live during the event.

“The past decade has seen great advancements in agtech, but there are still practical hurdles that must be cleared to improve access, adoption and benefits at the farm level,” said Rogge-Fidler. “We are looking forward to a robust discussion that examines the progress that has been made while also highlighting ways to address the remaining digital ag challenges and ensure the greatest opportunities for farmers.”

This event is being held virtually and is free to attend, but registration is required. Farmers, ranchers, food and agribusiness leaders, government officials and staff, industry representatives, NGO representatives, academics, students in agricultural disciplines and members of the media are all encouraged to attend. Register here.

Coders and Observers Invited to Hackathon to Address Data Interoperability in the Pork Industry https://www.farmfoundation.org/2021/03/09/coders-and-observers-invited-to-hackathon-to-address-data-interoperability-in-the-pork-industry/ Wed, 10 Mar 2021 00:25:00 +0000

The Great Pork Hackathon—Part 1: Shipping takes place March 24-26, 2021

OAK BROOK, Ill.—Farm Foundation®, an accelerator of practical solutions for agriculture, and the Open Ag Technology and Systems (OATS) Center at Purdue University are hosting The Great Pork Hackathon Series—Part 1: Shipping, March 24-26, 2021, in conjunction with OATSCON21. Software developers who can help build apps, tools and integrations, and pork industry stakeholders who can provide advice, ideas and direction are encouraged to participate in this free event.

“This event is a great opportunity for coders interested in agriculture to build their reputations as great developers while also helping to solve a real problem with open source software,” says Aaron Ault, Senior Research Engineer for Purdue OATS.

This is the first of a three-part hackathon series intended to solve problems for the pork industry through creating lasting code that will improve data flow and processes through open source interoperability. Part 1: Shipping will focus on modeling an Advance Ship Notification (ASN) for pigs via the Trellis API Framework as a two-way, real-time communication channel coordinating a farmer shipping pigs, a trucker hauling pigs and a processor receiving pigs. More information can be found on the event website, where resources for developers will also be posted as the event approaches.
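
For a sense of the data involved, an ASN document might carry fields like those below; every name is hypothetical, since the hackathon itself will define the actual Trellis-based model:

    # Every field name below is hypothetical; the hackathon's Trellis-based
    # data model will define the actual ASN schema.
    asn = {
        "shipment_id": "ASN-2021-00017",
        "farmer": {"premises_id": "PIN-0001"},
        "hauler": {"carrier_id": "HAUL-42", "eta": "2021-03-25T06:30:00Z"},
        "processor": {"plant_id": "PLANT-9"},
        "head_count": 180,
        "status": "scheduled",  # updated in real time by any of the parties
    }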

The Great Pork Hackathon Series is the result of work undertaken by Farm Foundation, Purdue OATS, and other partners to identify pain points for people on the ground in the pork industry regarding open sharing of data across the full supply chain. “Barriers to data interoperability aren’t unique to pork,” says Martha King, Vice President of Programs and Projects for Farm Foundation. “We are grateful that stakeholders from the pork industry stepped up to work with us and OATS to give us a great starting point to determine the best ways to create practical solutions for making data more shareable and useful in that space. Our ultimate goal is to create a blueprint to improve data interoperability in all sectors of agriculture.”

This event is being held virtually and is sponsored by the National Pork Board, Centricity, and other partners. Registration is required at https://farmfoundation.swoogo.com/porkhackathon1.

Farm Foundation® Announces Next Forum: Creating Supply Chain Agility through Data Innovation https://www.farmfoundation.org/2020/10/29/farm-foundation-announces-next-forum-creating-supply-chain-agility-through-data-innovation/ Thu, 29 Oct 2020 20:59:00 +0000

Registration is now open for November 17 virtual event

OAK BROOK, Ill.—Farm Foundation®, an accelerator of practical solutions for agriculture, will host its latest Forum, Creating Supply Chain Agility through Data Innovation, on Tuesday, November 17, 2020 from 10:00 a.m. to noon CST.

This Forum will examine the power of data to improve cooperation and agility throughout the agricultural supply chain, from farm gate to food service. Speakers will examine current challenges, such as barriers to efficiency and traceability, and the complexity of capturing and leveraging data across ag production systems. They will also discuss opportunities for the future, outlining a roadmap for suppliers and buyers to work together to create a more nimble and effective supply chain.

Attendees will hear from these experts:

  • Larkin Martin—CEO, Martin Farm
  • Ed Treacy—Vice President of Supply Chain and Sustainability, Produce Marketing Association
  • Ranveer Chandra—Chief Scientist, Microsoft Azure Global; Lead, FarmBeats Project

“Data affects every stage of the food and ag value chain,” said Shari Rogge-Fidler, Farm Foundation President and CEO. “On the heels of the supply and demand challenges of COVID-19, the potential for the improved use of that data to impact all stakeholders is a very timely topic.”

This event is being held virtually. There is no cost to attend, but registration is required. Professionals throughout the agricultural supply chain are encouraged to attend, including farmers and ranchers, packers and processors, distribution and logistics companies, food service providers and technology providers. Attendees may register at farmfoundation.org.
