Micro Core To Core

The core of MCTC is a configurable data buffer whose content is shipped to the peer core. Within the buffer, partitions can be reserved and used for communication. Typical partitions are network process data, acyclic data exchange via Remote Procedure Call (RPC), or user-specific process data.

Based on the configured protocols, GOAL initializes partitions for network process data at the beginning of the data buffer and a partition for RPC at the end. Between them, the user can create their own partitions with the MI Data Mapper (DM). This MI is used by MCTC to handle the content of the data buffer.

An example visualization of an MCTC data buffer map is shown in figure . The color coding of the white and yellow partitions applies to the entire document.

  • White: unused partition

  • Yellow: RPC partition

  • Other colors: described in text

Receiving data is done via a second MCTC data buffer. GOAL_MI_MCTC_DIR_PEER_TO always labels the transmitting direction of a core and GOAL_MI_MCTC_DIR_PEER_FROM the receiving direction.

Both cores set up their MCTC data buffers individually via DM. Placement and size of the GOAL_MI_MCTC_DIR_PEER_TO partitions must be equal to those of the GOAL_MI_MCTC_DIR_PEER_FROM partitions on the peer core.

An invalid configuration is visualized as an example in figure . The placement of the partition cyclic data 2 on CC does not match the one on AC. The data read by CC will not be the same as the data written by AC.

The user must ensure that the setup of their partitions matches on both cores.

Asynchronous reading and writing of data is possible.

The originators of the partitions can update the data independently. DM ensures that a merged buffer with the latest data is provided to MCTC. This snapshot is transferred to the peer core automatically.

On arrival of a new data buffer on a core, GOAL can inform the owner of the registered partition by a callback or provide the data for manual reading (polling).

The figure above shows the data transmission for one direction at the moments t1, t2, and t3. While all cyclic partitions are exchanged on every transfer, keeping the latest values, RPC data is only updated when an RPC function is called.

In summary, MCTC provides:

  • configurable partitions for exchanging cyclic and non-cyclic data

  • optional callback for incoming or outgoing messages

  • simple data mapping on data buffer

Description of MCTC example application

GOAL provides a simple MCTC application located in appl/00410_goal/mctc. It demonstrates the basic handling of initialization, transmission, and reception of user-specific process data. The application shows both sides of the communication.

For demonstration, a counter and a timestamp are exchanged.

GOAL_TARGET_PACKED_STRUCT_PRE
typedef GOAL_TARGET_PACKED_PRE struct {
    uint64_t cnt;                /**< message counter */
    GOAL_TIMESTAMP_T ts;         /**< message timestamp */
} GOAL_TARGET_PACKED APPL_MSG_T;
GOAL_TARGET_PACKED_STRUCT_POST

The cores evaluate this process data by reading it cyclically, saving it, and comparing it to newer values. An error is logged in case of an inconsistency.
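
A minimal sketch of this check is shown below. It assumes the read handle mMiDmRead registered in the Initialization section and a goal_miDmSingleRead() signature that mirrors goal_miDmSingleWrite(); the names lastCnt and msgData are illustrative.

/* sketch of the cyclic consistency check; the goal_miDmSingleRead()
 * signature and the variable names are assumptions for illustration */
static uint64_t lastCnt = 0;     /* last message counter received */
APPL_MSG_T msgData;              /* buffer for the received message */

res = goal_miDmSingleRead(&mMiDmRead, (uint8_t *) &msgData, sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to read cyclic part");
    return res;
}

/* the message counter must never decrease between two cyclic reads */
if (msgData.cnt < lastCnt) {
    goal_logErr("message counter inconsistency detected");
}
lastCnt = msgData.cnt;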

Initialization

A partition on the data buffer has to be registered by goal_miDmPartIdxReg() or goal_miDmPartReg().

/* register MI DM partitions */
res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL, &mMiDmRead, sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to register cyclic read part");
    return res;
}

res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL, &mMiDmWrite, sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to register cyclic write part");
    return res;
}

In addition to the data direction, described by GOAL_MI_MCTC_DIR_PEER_FROM and GOAL_MI_MCTC_DIR_PEER_TO, multiple partitions can be grouped by a group ID. In the code example above, this is GOAL_ID_APPL.

With the help of the returned MI DM handles mMiDmRead and mMiDmWrite, data exchange is possible.

Please note that only goal_miDmPartIdxReg() or only goal_miDmPartReg() should be used for the registration of multiple partitions. Mixing both functions is not recommended.

The smoothest way of initializing a read partition, for example, would be to (see the sketch after this list):

  • Core 1: register a new partition by goal_miDmPartReg() to GOAL_MI_MCTC_DIR_PEER_FROM

  • Core 1: get the position of the created partition by goal_miDmPartIdxGet()

  • Core 1: transfer the index ID and data length to the other core by RPC

  • Core 2: register a partition for receiving the data by goal_miDmPartIdxReg() based on the passed information
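
A minimal sketch of this sequence follows. The signatures of goal_miDmPartIdxGet() and goal_miDmPartIdxReg() are assumptions for illustration (only goal_miDmPartReg() is used as documented above), and the RPC transport is omitted.

/* Core 1: register a read partition and determine its position;
 * the goal_miDmPartIdxGet() signature is an assumption for illustration */
uint16_t partIdx;                /* index of the created partition */

res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL, &mMiDmRead, sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to register cyclic read part");
    return res;
}
res = goal_miDmPartIdxGet(&mMiDmRead, &partIdx);

/* transfer partIdx and sizeof(APPL_MSG_T) to core 2 by RPC (not shown) */

/* Core 2: register the matching partition at the received index;
 * the goal_miDmPartIdxReg() signature is an assumption for illustration */
res = goal_miDmPartIdxReg(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL, &mMiDmWrite, partIdx, sizeof(APPL_MSG_T));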

Writing data

After MCTC is fully initialized, writing process data to the peer core is possible by updating the configured partition content.

GOAL provides several functions for this. With the help of goal_miDmSingleWrite(), a single partition is updated on the data buffer.

/* write input partition data */
res = goal_miDmSingleWrite(&mMiDmWrite, (uint8_t *) &msgData, sizeof(APPL_MSG_T));

If several pieces of process data have to be written simultaneously, a defined group can be locked by goal_miDmGroupWriteStart(). Afterwards, the data of several associated MI DM handles can be updated with goal_miDmGroupPartWrite(), followed by unlocking the group with the help of goal_miDmGroupWriteEnd().
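
This sequence can be sketched as follows; the argument lists of the group functions are assumptions, and goal_miDmGroupPartWrite() is assumed to take the same arguments as goal_miDmSingleWrite().

/* lock the group so all partitions are updated as one consistent set;
 * the argument lists are assumptions for illustration */
res = goal_miDmGroupWriteStart(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL);
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to lock write group");
    return res;
}

/* update all associated partitions while the group is locked */
res = goal_miDmGroupPartWrite(&mMiDmWrite, (uint8_t *) &msgData, sizeof(APPL_MSG_T));

/* unlock the group, making the new data available for transfer */
res = goal_miDmGroupWriteEnd(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL);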

Another option is to retrieve the write buffer directly via goal_miDmGroupWriteBufGet(). Locking the group is still necessary when not inside the group callback. The following code segment shows an example of the required steps for writing data from pData to the write partition with the help of goal_miDmGroupWriteBufGet().
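
This is a minimal sketch; the signature of goal_miDmGroupWriteBufGet() and the pointer name pBuf are assumptions for illustration.

/* lock the group before accessing the write buffer directly
 * (not required inside the group callback) */
res = goal_miDmGroupWriteStart(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL);
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to lock write group");
    return res;
}

/* retrieve the write buffer of the partition; signature is an assumption */
uint8_t *pBuf = NULL;
res = goal_miDmGroupWriteBufGet(&mMiDmWrite, &pBuf);
if (!GOAL_RES_ERR(res)) {
    /* copy the application data directly into the write partition */
    memcpy(pBuf, pData, sizeof(APPL_MSG_T));
}

/* unlock the group, making the new data available for transfer */
res = goal_miDmGroupWriteEnd(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL);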

Receiving data

After MCTC is fully initialized, receiving process data from the peer core is possible.

GOAL provides two functions for this. With the help of goal_miDmSingleRead(), the content of a single partition is extracted.
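
Analogous to the write example above, a single read may look like this; the goal_miDmSingleRead() signature is assumed to mirror goal_miDmSingleWrite().

/* read partition data; signature assumed to mirror goal_miDmSingleWrite() */
res = goal_miDmSingleRead(&mMiDmRead, (uint8_t *) &msgData, sizeof(APPL_MSG_T));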

If several pieces of process data have to be read simultaneously, a defined group can be locked by goal_miDmGroupReadStart(). Afterwards, the data of several associated partitions can be read with goal_miDmGroupPartRead(), followed by unlocking the group with the help of goal_miDmGroupReadEnd().
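
Mirroring the group write sequence above, with the same assumed argument lists:

/* lock the group to obtain one consistent snapshot of all partitions;
 * the argument lists are assumptions for illustration */
res = goal_miDmGroupReadStart(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL);
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to lock read group");
    return res;
}

/* read all associated partitions while the group is locked */
res = goal_miDmGroupPartRead(&mMiDmRead, (uint8_t *) &msgData, sizeof(APPL_MSG_T));

/* unlock the group again */
res = goal_miDmGroupReadEnd(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL);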

Callbacks

If manual polling for new data is not desired, the DM provides a function for callback registration, as shown in the example below.
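
The registration sketched below is hypothetical: the function name goal_miDmCbReg() and the callback prototype are placeholders for illustration; only the callback type defines GOAL_MI_DM_CB_READ and GOAL_MI_DM_CB_WRITE are taken from the defines table below.

/* hypothetical read callback, invoked when any partition of the
 * registered group received new data */
static void appl_dmReadCb(void) {
    /* read the partitions as described in section Receiving data */
}

/* hypothetical registration: group ID, callback type, callback function */
res = goal_miDmCbReg(GOAL_ID_APPL, GOAL_MI_DM_CB_READ, appl_dmReadCb);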

Setting a read callback informs the application whenever any part of the named group receives new data. Reading the partitions out of the data buffer, as described in section Receiving data, is still necessary.

Defines and prototypes of MCTC and DM

The following table lists the MCTC and DM specific defines.

Table: defines of MCTC and DM

define                        description
GOAL_MI_MCTC_DIR_PEER_TO      data direction ID: transmission to peer core
GOAL_MI_MCTC_DIR_PEER_FROM    data direction ID: receiving from peer core
GOAL_MI_DM_CB_READ            callback type: reading
GOAL_MI_DM_CB_WRITE           callback type: writing

Code: prototype of DM callback function

Code: prototype of the GOAL_MI_MCTC_CB_RESET_T function

Code: prototype of the GOAL_MI_MCTC_CB_TIMEOUT_RX_T function