1 General Information
1.1 Changelog
| Date | Rev | Author | Comment |
|---|---|---|---|
| 2021-01-25 | 1.5 | maz | added goal_miDmPartStatusGet, rewrote the description of goal_miDmGroupReadBufGet |
| 2020-10-12 | 1.4 | mhr | added goal_miDmGroupReadBufGet |
| 2020-10-09 | 1.3 | maz | added warning for goal_miDmGroupWriteBufGet |
| 2020-09-29 | 1.2 | bit | added function goal_miMctcOpen |
| 2020-09-09 | 1.1 | maz | rewrote the DM description, added functions for RPC timeout manipulation |
| 2019-10-10 | 1.0 | maz | added description of RPC, reorganized chapters and sections, minor text updates, updated GOAL version and copyright |
| 2018-12-06 | 0.1 | maz | initial revision |
2 Abbreviations
list of abbreviations
| Abbreviation | Description |
|---|---|
| AC | Application Core |
| CC | Communication Core |
| DM | Data Mapper |
| GOAL | Generic Open Abstraction Layer |
| MA | Media Adapter |
| MCTC | Micro Core to Core |
| MI | Media Interface |
| RPC | Remote Procedure Call |
3 Introduction
The Generic Open Abstraction Layer (GOAL) offers a communication mechanism for exchanging data between two cores. This feature is called Micro Core To Core (MCTC).
In general, this is required for splitting the functionality of industrial network communication and application onto separate cores. The core running the network protocol is called Communication Core (CC), while the other one is named Application Core (AC).
MCTC works independently of the physical layer. The connection to a low-level driver, e.g. DRAM or SPI, is realized by GOAL Media Adapters (MA) and GOAL Media Interfaces (MI).
The figure shows a simple layer structure of MCTC in GOAL and its relation to the lower and upper layers.
4 Micro Core To Core
The focus of MCTC is a configurable data buffer that ships its content to the peer core. Within the buffer, partitions can be occupied and used for communication. Typical partitions are network process data, acyclic data exchange via Remote Procedure Call (RPC) or user-specific process data.
Based on the configured protocols, GOAL initializes partitions for network process data at the beginning of the data buffer and a partition for RPC at the end. Between them, the user can create their own partitions with the MI Data Mapper (DM). This MI is used by MCTC for handling the content of the data buffer.
An example visualization of an MCTC data buffer map is shown in the figure. The labeling of the white and yellow partitions applies to the entire document:
- white: unused partition
- yellow: RPC partition
- other colors: described in the text
Receiving data is done via a second MCTC data buffer. GOAL_MI_MCTC_DIR_PEER_TO always labels the transmit direction of a core and GOAL_MI_MCTC_DIR_PEER_FROM its receive direction.
Both cores set up their MCTC data buffers individually via DM. Placement and size of the GOAL_MI_MCTC_DIR_PEER_TO partitions must be equal to those of the GOAL_MI_MCTC_DIR_PEER_FROM partitions on the peer core.
An invalid configuration is visualized in the figure. The placement of the partition "cyclic data 2" on the CC does not match the AC. The data read by the CC will not be the same as written by the AC.
The user must ensure that the setup of the partitions matches on both cores.
Asynchronous reading and writing of data is possible.
The originators of the partitions can update the data independently. DM makes sure to provide a merged buffer with the latest data to MCTC. This snapshot is transferred to the peer core automatically.
On arrival of a new data buffer on a core, GOAL can inform the owner of a registered partition by a callback or provide the data for manual reading (polling).
The figures show the data transmission for one direction at the moments t1, t2 and t3. While all cyclic partitions are exchanged on every transfer, keeping the latest values, the RPC partition is only updated when an RPC function is called.
In summary, MCTC provides:
- configurable partitions for exchanging cyclic and non-cyclic data
- optional callbacks for incoming or outgoing messages
- simple data mapping on the data buffer
4.1 Description of MCTC example application
GOAL provides a simple MCTC application located in appl/00410_goal/mctc. It sums up the basic handling of initialization, transmission and reception of user-specific process data. The application shows both sides of the communication.
For demonstration, a counter and a timestamp are exchanged as an example.
```c
GOAL_TARGET_PACKED_STRUCT_PRE
typedef GOAL_TARGET_PACKED_PRE struct {
    uint64_t cnt;                               /**< message counter */
    GOAL_TIMESTAMP_T ts;                        /**< message timestamp */
} GOAL_TARGET_PACKED APPL_MSG_T;
GOAL_TARGET_PACKED_STRUCT_POST
```
The cores evaluate these process data by reading them cyclically, saving them and comparing them to the newer ones. An error is logged in case of inconsistency.
4.1.1 Initialization
A partition on the data buffer has to be registered by goal_miDmPartIdxReg or goal_miDmPartReg.
```c
/* register MI DM partitions */
res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL, &mMiDmRead,
    sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to register cyclic read part");
    return res;
}

res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_TO, GOAL_ID_APPL, &mMiDmWrite,
    sizeof(APPL_MSG_T));
if (GOAL_RES_ERR(res)) {
    goal_logErr("failed to register cyclic write part");
    return res;
}
```
Next to the direction of the data, described by GOAL_MI_MCTC_DIR_PEER_FROM and GOAL_MI_MCTC_DIR_PEER_TO, multiple partitions can be grouped by a group ID. In the code example above, this is GOAL_ID_APPL.
With the help of the returned MI DM handles mMiDmRead and mMiDmWrite, data exchange is possible.
Please note: use only goal_miDmPartIdxReg or only goal_miDmPartReg for the registration of multiple partitions. Mixing both functions is not recommended.
The smoothest way of initializing e.g. a read partition would be to:
- Core 1: register a new partition by goal_miDmPartReg to GOAL_MI_MCTC_DIR_PEER_FROM
- Core 1: get the position of the created partition by goal_miDmPartIdxGet
- Core 1: transfer the index, ID and data length to the other core by RPC
- Core 2: register a partition for receiving the data by goal_miDmPartIdxReg, based on the passed information
4.1.2 Writing data
After MCTC is fully initialized, writing process data to the peer core is possible by updating the configured partition content.
GOAL provides several functions for this. With the help of goal_miDmSingleWrite, a single partition is updated on the data buffer.
```c
/* write input partition data */
res = goal_miDmSingleWrite(&mMiDmWrite, (uint8_t *) &msgData,
    sizeof(APPL_MSG_T));
```
If several process data have to be written simultaneously, a defined group can be locked by goal_miDmGroupWriteStart. Afterwards, the data of the associated MI DM handles can be updated with goal_miDmGroupPartWrite, followed by unlocking the group with goal_miDmGroupWriteEnd.
```c
/* initiate group start */
res = goal_miDmGroupWriteStart(pMiMctc->pGroup);
if (GOAL_RES_ERR(res)) {
    return res;
}

/* update all group elements */
for (cnt = 0; GOAL_RES_OK(res) && (cnt < APPL_PROCESS_ELEM_NUM); cnt++) {
    res = goal_miDmGroupPartWrite(&mMiDmWrite[cnt], &pData[cnt], len);
}
if (GOAL_RES_ERR(res)) {
    goal_logErr("invalid updating of group");
}

/* end grouped write */
res = goal_miDmGroupWriteEnd(pMiMctc->pGroup);
```
Another option is to retrieve the write buffer directly by goal_miDmGroupWriteBufGet. Locking the group is still necessary when not inside the group callback. The following code segment shows the required steps for writing data from pData to the write partition with the help of goal_miDmGroupWriteBufGet.
```c
/* initiate group start */
res = goal_miDmGroupWriteStart(pMiMctc->pGroup);
if (GOAL_RES_ERR(res)) {
    return res;
}

/* retrieve direct write buffer */
res = goal_miDmGroupWriteBufGet(&pBuf, &len, mMiDmWrite);
if (GOAL_RES_OK(res)) {
    GOAL_MEMCPY(pBuf, pData, len);
}

/* end grouped write */
res = goal_miDmGroupWriteEnd(pMiMctc->pGroup);
```
4.1.3 Receiving data
After MCTC is fully initialized, receiving process data from the peer core is possible.
GOAL provides two functions for this. With the help of goal_miDmSingleRead, the content of a single partition is extracted.
```c
/* read output partition data */
res = goal_miDmSingleRead(&mMiDmRead, (uint8_t *) &msgData);
```
If several process data have to be read simultaneously, a defined group can be locked by goal_miDmGroupReadStart. Afterwards, the data of the associated partitions can be read with goal_miDmGroupPartRead, followed by unlocking the group with goal_miDmGroupReadEnd.
```c
/* initiate grouped read */
res = goal_miDmGroupReadStart(pMiMctc->pGroup);
if (GOAL_RES_ERR(res)) {
    return res;
}

/* extract the content of the group elements */
for (cnt = 0; GOAL_RES_OK(res) && (cnt < APPL_PROCESS_ELEM_NUM); cnt++) {
    res = goal_miDmGroupPartRead(&mMiDmRead[cnt], &pData[cnt]);
}
if (GOAL_RES_ERR(res)) {
    goal_logErr("invalid reading of group");
}

/* finish grouped read */
goal_miDmGroupReadEnd(pMiMctc->pGroup);
```
4.1.4 Callbacks
If manual polling of new data is not desired, the DM provides a function for callback registration, as shown below.
```c
/* register read callback for cyclic data */
res = goal_miDmCbReg(NULL, pMiMctc->pGroup, GOAL_MI_DM_CB_READ,
    appl_dmCbCyclicRd, NULL);
```
Setting a read callback informs the application whenever any part of the named group receives new data. Reading the partitions out of the data buffer, as described in section Receiving data, is still necessary.
4.2 Defines and prototypes of MCTC and DM
The following table lists the MCTC and DM specific defines.
defines of MCTC and DM
| Define | Description |
|---|---|
| GOAL_MI_MCTC_DIR_PEER_TO | data direction ID: transmission to peer core |
| GOAL_MI_MCTC_DIR_PEER_FROM | data direction ID: receiving from peer core |
| GOAL_MI_DM_CB_READ | callback type: reading |
| GOAL_MI_DM_CB_WRITE | callback type: writing |
The prototype of the DM callback function is:
```c
/**< MI DM Callback Function */
typedef GOAL_STATUS_T (* GOAL_MI_DM_CB_FUNC_T)(
    struct GOAL_MI_DM_GROUP_T *pGroup,          /**< [in] group handle */
    void *pPriv                                 /**< [in] private data */
);
```
The prototype of the GOAL_MI_MCTC_CB_RESET_T function is:

```c
typedef GOAL_STATUS_T (* GOAL_MI_MCTC_CB_RESET_T)(
    struct GOAL_MI_MCTC_INST_T *pMiMctc         /**< MI MCTC instance */
);
```

The prototype of the GOAL_MI_MCTC_CB_TIMEOUT_RX_T function is:

```c
typedef GOAL_STATUS_T (* GOAL_MI_MCTC_CB_TIMEOUT_RX_T)(
    struct GOAL_MI_MCTC_INST_T *pMiMctc         /**< MI MCTC instance */
);
```
5 Application Programming Interface of cyclic data
This chapter lists the API functions that are provided by GOAL's MCTC. Next to a short description and the presentation of the function arguments, additional notes are given.
Handling of these arguments is simplified in the code examples.
5.1 MCTC API
5.1.1 goal_miMctcOpen - Open and Activate MCTC Instance
This function opens an MCTC instance. This enables sending and receiving.
Returns a GOAL_STATUS_T status.
goal_miMctcOpen arguments
| Arguments | Description |
|---|---|
| GOAL_MI_MCTC_INST_T **ppInst | return handle |
| unsigned int id | id of instance |

```c
res = goal_miMctcOpen(&pInst, GOAL_ID_DEFAULT);
```
Remark: for compatibility reasons this function is also available as goal_rpcSetupChannel as part of the RPC API.
5.1.2 goal_miMctcCbRegReset - Register Reset Callback
This function registers a callback that is triggered by MCTC when a reset is required. Multiple callbacks can be set; they are called one after another in order of registration.
Returns a GOAL_STATUS_T status.
goal_miMctcCbRegReset arguments
| Arguments | Description |
|---|---|
| GOAL_MI_MCTC_CB_RESET_T func | callback function |

```c
res = goal_miMctcCbRegReset(appl_syncReset);
```
5.1.3 goal_miMctcCbRegToutRx - Register RX Timeout Callback
This function registers a callback that is triggered by MCTC on a receive timeout. Multiple callbacks can be set; they are called one after another in order of registration.
Returns a GOAL_STATUS_T status.
goal_miMctcCbRegToutRx arguments
| Arguments | Description |
|---|---|
| GOAL_MI_MCTC_CB_TIMEOUT_RX_T func | callback function |

```c
res = goal_miMctcCbRegToutRx(appl_syncTimeout);
```
5.1.4 goal_miMctcStatusGet - get the MCTC status
This function gets the status of the MCTC instance.
Returns GOAL_OK if MCTC is ready.
goal_miMctcStatusGet arguments
| Arguments | Description |
|---|---|
| unsigned int id | instance id |

```c
res = goal_miMctcStatusGet(GOAL_ID_DEFAULT);
if (GOAL_RES_ERR(res)) {
    return;
}
```
5.1.5 goal_miMctcInstGetById - Instance Get By Id
Based on the instance ID, this function returns the assigned MCTC handle.
Returns a GOAL_STATUS_T status.
goal_miMctcInstGetById arguments
| Arguments | Description |
|---|---|
| GOAL_MI_MCTC_INST_T **ppMiMctcInst | [out] MCTC MI instance |
| uint32_t id | MCTC MI instance id |

```c
res = goal_miMctcInstGetById(&pMiMctcInst, GOAL_ID_DEFAULT);
```
5.1.6 goal_miMctcCfgTout - configure RPC timeout
This function sets the RPC timeout before creating an MCTC instance.
Returns a GOAL_STATUS_T status.
goal_miMctcCfgTout arguments
| Arguments | Description |
|---|---|
| uint32_t timeout | MCTC RPC timeout |

A value of 0 disables the RPC timeout. This should only be used for debugging. This function needs to be called before goal_miMctcOpen.

```c
/* disable RPC timeout */
res = goal_miMctcCfgTout(0);
```
5.2 DM API
5.2.1 goal_miDmPartReg - Register Data Partition
This function attaches a new partition behind the existing ones on the data buffer named by the DM handle ID. The partition is assigned to the named group.
GOAL logs the start position and length of a new partition to facilitate a review of data buffer configuration.
Note: groups can only be created in the setup phase. If the memory allocation is already closed, the call to goal_miDmGroupNew will fail if the group doesn't exist.
Returns a GOAL_STATUS_T status.
goal_miDmPartReg arguments
| Arguments | Description |
|---|---|
| uint32_t idMiDm | DM handle id |
| GOAL_ID_T idGroup | group id |
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint32_t lenPart | partition length |

```c
/* register MI DM partitions */
res = goal_miDmPartReg(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL,
    &mMiDmRead, sizeof(APPL_MSG_T));
```
5.2.2 goal_miDmPartIdxReg - Register Data Partition
This function occupies a partition on the data buffer at the given index position. If the index is already in use, an error is raised. The partition is assigned to the named group.
GOAL logs the start position and length of a new partition to facilitate a review of data buffer configuration.
Note: groups can only be created in the setup phase. If the memory allocation is already closed, the call to goal_miDmGroupNew will fail if the group doesn't exist.
Returns a GOAL_STATUS_T status.
goal_miDmPartIdxReg arguments
| Arguments | Description |
|---|---|
| uint32_t idMiDm | DM handle id |
| GOAL_ID_T idGroup | group id |
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint32_t lenPart | partition length |
| uint32_t idx | partition index |

```c
/* register MI DM partitions */
res = goal_miDmPartIdxReg(GOAL_MI_MCTC_DIR_PEER_FROM, GOAL_ID_APPL,
    &mMiDmRead, sizeof(APPL_MSG_T), 0);
```
5.2.3 goal_miDmPartSizeGet - Get Partition Size
This function returns the size of a registered partition. If the partition handle is invalid or not active, 0 is returned.
goal_miDmPartSizeGet arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |

```c
res = goal_miDmPartSizeGet(&mMiDmRead);
```
5.2.4 goal_miDmPartStatusGet - Get Partition Status
This function returns the status of a registered partition. If the partition handle is valid and active, GOAL_TRUE is returned.
goal_miDmPartStatusGet arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |

```c
GOAL_BOOL_T status;                             /* partition status */

status = goal_miDmPartStatusGet(&mMiDmRead);
```
5.2.5 goal_miDmSingleWrite - Single Write
Writing data to a single partition is done by this function.
Returns a GOAL_STATUS_T status.
goal_miDmSingleWrite arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint8_t *pBuf | data source |
| unsigned int len | write length |

```c
res = goal_miDmSingleWrite(&mMiDmWrite, (uint8_t *) &msgData,
    sizeof(APPL_MSG_T));
```
5.2.6 goal_miDmGroupPartWrite - Partition Write
Writing data to a partition is done by this function. The group has to be locked by goal_miDmGroupWriteStart before and released by goal_miDmGroupWriteEnd afterwards.
Warning: writing to a single partition's write buffer isn't thread-safe if the group is not locked by goal_miDmGroupWriteStart and goal_miDmGroupWriteEnd.
Returns a GOAL_STATUS_T status.
goal_miDmGroupPartWrite arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint8_t *pBuf | data source |
| unsigned int len | write length |

```c
res = goal_miDmGroupPartWrite(&mMiDmWrite[cnt], &pData[cnt], len);
```
5.2.7 goal_miDmGroupWriteBufGet - Retrieve Direct Write Buffer
This function retrieves the direct write buffer for access. The group has to be locked by goal_miDmGroupWriteStart before and released by goal_miDmGroupWriteEnd afterwards.
Warning: writing to a single partition's write buffer isn't thread-safe if the group is not locked by goal_miDmGroupWriteStart and goal_miDmGroupWriteEnd.
The content of the provided buffer is undefined: it is neither empty nor guaranteed to hold the old data. Ensure to write the complete buffer, otherwise invalid data will be exchanged.
Returns a GOAL_STATUS_T status.
goal_miDmGroupWriteBufGet arguments
| Arguments | Description |
|---|---|
| uint8_t **ppBuf | [out] partition pointer |
| uint32_t *pLen | partition length |
| GOAL_MI_DM_PART_T *pPart | partition data |

```c
res = goal_miDmGroupWriteBufGet(&pBuf, &len, mMiDmWrite);
```
5.2.8 goal_miDmGroupWriteStart - Group Write Start
This function locks a group of partitions on the data buffer for updating them. Unlocking the group is done by goal_miDmGroupWriteEnd.
Returns a GOAL_STATUS_T status.
goal_miDmGroupWriteStart arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_GROUP_T *pGroup | partition group |

```c
res = goal_miDmGroupWriteStart(pMiMctc->pGroup);
```
5.2.9 goal_miDmGroupWriteEnd - Group Write End
This function unlocks a group of partitions on the data buffer. Locking the group is done by goal_miDmGroupWriteStart.
Returns a GOAL_STATUS_T status.
goal_miDmGroupWriteEnd arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_GROUP_T *pGroup | partition group |

```c
res = goal_miDmGroupWriteEnd(pMiMctc->pGroup);
```
5.2.10 goal_miDmSingleRead - Single Read
Reading the data of a single partition is done by this function.
Returns a GOAL_STATUS_T status.
goal_miDmSingleRead arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint8_t *pBuf | buffer reference |

```c
res = goal_miDmSingleRead(&mMiDmRead, (uint8_t *) &msgData);
```
5.2.11 goal_miDmSingleReadBufGet - Retrieve Direct Read Buffer
This function retrieves the direct read buffer for access.
Returns a GOAL_STATUS_T status.
goal_miDmSingleReadBufGet arguments
| Arguments | Description |
|---|---|
| uint8_t **ppBuf | [out] partition pointer |
| uint32_t *pLen | partition length |
| GOAL_MI_DM_PART_T *pPart | partition data |

```c
res = goal_miDmSingleReadBufGet(&pBuf, &len, &mMiDmRead);
```
5.2.12 goal_miDmGroupPartRead - Partition Read
Reading data from a partition is done by this function. The group has to be locked by goal_miDmGroupReadStart before and released by goal_miDmGroupReadEnd afterwards.
Warning: reading from a single partition isn't thread-safe if the group is not locked by goal_miDmGroupReadStart and goal_miDmGroupReadEnd.
Returns a GOAL_STATUS_T status.
goal_miDmGroupPartRead arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_PART_T *pPart | partition data |
| uint8_t *pBuf | data destination |

```c
res = goal_miDmGroupPartRead(&mMiDmRead[cnt], &pData[cnt]);
```
5.2.13 goal_miDmGroupReadBufGet - Retrieve Direct Read Buffer
This function retrieves the direct read buffer for access. The group has to be locked by goal_miDmGroupReadStart before and released by goal_miDmGroupReadEnd afterwards.
Warning: reading from a single partition buffer isn't thread-safe if the group is not locked by goal_miDmGroupReadStart and goal_miDmGroupReadEnd.
Multiple calls of goal_miDmGroupReadBufGet between the same goal_miDmGroupReadStart and goal_miDmGroupReadEnd refer to the same process data image, i.e. the read data remain consistent across more than one read operation.
Returns a GOAL_STATUS_T status.
goal_miDmGroupReadBufGet arguments
| Arguments | Description |
|---|---|
| uint8_t **ppBuf | [out] partition pointer |
| uint32_t *pLen | partition length |
| GOAL_MI_DM_PART_T *pPart | partition data |

```c
res = goal_miDmGroupReadBufGet(&pBuf, &len, mMiDmRead);
```
5.2.14 goal_miDmGroupReadStart - Group Read Start
This function locks a group of partitions on the data buffer for reading them. Unlocking the group is done by goal_miDmGroupReadEnd.
Returns a GOAL_STATUS_T status.
goal_miDmGroupReadStart arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_GROUP_T *pGroup | partition group |

```c
res = goal_miDmGroupReadStart(pMiMctc->pGroup);
```
5.2.15 goal_miDmGroupReadEnd - Group Read End
This function unlocks a group of partitions on the data buffer. Locking the group is done by goal_miDmGroupReadStart.
Returns a GOAL_STATUS_T status.
goal_miDmGroupReadEnd arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_GROUP_T *pGroup | partition group |

```c
res = goal_miDmGroupReadEnd(pMiMctc->pGroup);
```
5.2.16 goal_miDmGroupGetByIdx - Get Group Handle by Index
This function is used to get the handle of a group.
Returns a GOAL_STATUS_T status.
goal_miDmGroupGetByIdx arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_GROUP_T **ppGroup | [out] group handle |
| uint32_t idMiDm | MI DM id |
| GOAL_ID_T idGroup | group id |

```c
res = goal_miDmGroupGetByIdx(&pGroup, GOAL_MI_MCTC_DIR_PEER_FROM,
    GOAL_ID_APPL);
```
5.2.17 goal_miDmInstGetByPart - Get Instance By Part Handle
This function is used to get the DM handle for a partition handle.
Returns a GOAL_STATUS_T status.
goal_miDmInstGetByPart arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_T **ppDm | [out] MI DM handle |
| GOAL_MI_DM_PART_T *pPart | part handle |

```c
res = goal_miDmInstGetByPart(&pDm, mMiDmRead);
```
5.2.18 goal_miDmCbReg - Register Callback
This function is used to register a callback that informs the application about any new incoming or outgoing data. Setting an MI DM handle callback is preferred over setting group callbacks. By keeping the MI DM handle NULL, the callback is registered for a group.
Returns a GOAL_STATUS_T status.
goal_miDmCbReg arguments
| Arguments | Description |
|---|---|
| GOAL_MI_DM_T *pMiDm | [in] MI DM handle |
| GOAL_MI_DM_GROUP_T *pGroup | [in] MI DM group handle |
| uint32_t typeCb | callback type |
| GOAL_MI_DM_CB_FUNC_T funcCb | callback function |
| void *pPriv | callback private data |

```c
res = goal_miDmCbReg(NULL, pMiMctc->pGroup, GOAL_MI_DM_CB_READ,
    appl_dmCbCyclicRd, NULL);
```
5.2.19 goal_miDmGroupPartsRemove - Remove all mapped partitions
This function removes the mapped partitions of a named group. Returns a GOAL_STATUS_T status.
goal_miDmGroupPartsRemove arguments
| Arguments | Description |
|---|---|
| uint32_t idMiDm | DM handle ID |
| GOAL_ID_T idGroup | Group ID |

```c
/* remove all partitions from group */
res = goal_miDmGroupPartsRemove(APPL_DM_ID, APPL_DM_ID_GROUP);
```
6 Remote Procedure Call
Remote Procedure Call (RPC) is the method of executing a functionality on a peer core. The core sending the request is called client and the core providing the procedure is called server.
RPC is an upper layer function of Micro Core to Core (MCTC) used to exchange low priority or non-time-critical data.
GOAL fragments big data automatically and supervises the exchange of packets with a sequence counter. Missing fragments are retransmitted multiple times to ensure that the peer core receives the complete message.
6.1 Description of RPC example application
GOAL provides a simple RPC application located in appl/00410_goal/rpc. It shows the basic handling of initialization, transmission and reception. The application merges the client and the server side of a communication.
For demonstration, the variables varA and varB are transmitted to the server for multiplication. The client receives the result as variable varC and checks it.
The code examples of the following sections refer to this simple RPC application.
6.1.1 RPC as client
First of all, the client creates an RPC channel handle by goal_rpcSetupChannel, e.g. during appl_setup.
```c
/* setup a channel for RPC init */
res = goal_rpcSetupChannel(&pHdlRpcChn, GOAL_ID_DEFAULT);
```
Afterwards a new RPC (transmission) handle can be requested by GOAL_RPC_NEW. It contains an empty RPC stack, onto which data can be pushed by GOAL_RPC_PUSH. GOAL_RPC_USER_CALL calls the procedure on the server side by transferring the complete RPC stack to the peer core. After receiving a response, relevant data can be popped from the RPC stack by GOAL_RPC_POP.
Finally, the requested RPC handle has to be closed by GOAL_RPC_CLOSE.
These steps are summarized in goal_rpcUserFunction.
```c
static GOAL_STATUS_T goal_rpcUserFunction(
    uint32_t *pVarC,                            /**< result C */
    uint16_t varA,                              /**< variable A */
    uint16_t varB                               /**< variable B */
)
{
    GOAL_STATUS_T res = GOAL_OK;                /* result */
    GOAL_RPC_HDL_T *pHdlRpc = GOAL_RPC_HDL_NONE; /* call handle */

    /* get a new rpc handle */
    GOAL_RPC_NEW();

    /* push the value of Var A */
    GOAL_RPC_PUSH(varA);

    /* push the value of Var B */
    GOAL_RPC_PUSH(varB);

    /* call the procedure at the server */
    GOAL_RPC_USER_CALL(GOAL_RPC_FUNC_TEST);

    /* pop the value of Var C */
    GOAL_RPC_POP(*pVarC, uint32_t);

    /* rpc is done successfully */
    GOAL_RPC_CLOSE();

    return res;
}
```
6.1.2 RPC as server
The server registers a callback for a user function ID by GOAL_RPC_USER_REGISTER_SERVICE.
```c
/* setup the service function */
GOAL_RPC_USER_REGISTER_SERVICE(GOAL_RPC_FUNC_TEST,
    &goal_rpcUserFunctionServer);
```
All received RPC messages are compared by function ID and module ID, triggering the callback on a match. The server has to pop the arguments from the RPC handle and call the requested function afterwards. Ensure that the arguments are popped from the RPC handle in reverse order with respect to how the client pushed them.
Pointer arguments expecting a return value, like the variable varC in the example, have to be pushed to the RPC handle for the response.
The return value of the server callback function becomes the return value of GOAL_RPC_USER_CALL on the client side, on a successful response transfer. Otherwise, the return value on the client side is a GOAL_STATUS_T error.
```c
static GOAL_STATUS_T goal_rpcUserFunctionServer(
    GOAL_RPC_HDL_T *pHdlRpc                     /**< RPC handle */
)
{
    uint16_t varA = 0;                          /* variable A */
    uint16_t varB = 0;                          /* variable B */
    uint32_t varC = 0;                          /* result C */
    GOAL_STATUS_T res = GOAL_OK;                /* result */

    /* pop the value of Var B */
    GOAL_RPC_POP(varB, uint16_t);

    /* pop the value of Var A */
    GOAL_RPC_POP(varA, uint16_t);

    /* call the server function */
    if (GOAL_RES_OK(res)) {
        res = goal_userFunction(&varC, varA, varB);
    }

    /* push the value of Var C */
    GOAL_RPC_PUSH(varC);

    return res;
}
```
6.2 Defines and prototypes of RPC
The defines are located in the GOAL Media Interface MCTC header at goal_media/goal_mi_mctc.h.
RPC defines
| Define | Description | Default value |
|---|---|---|
| GOAL_MI_MCTC_HDL_CNT | RPC transmission handle count | 4 |
| GOAL_MI_MCTC_XFER_SIZE | RPC transfer size | 32 |
| GOAL_MI_MCTC_RPC_MAX_SEQ | maximum sequence | 10 |
| GOAL_MI_MCTC_RPC_CNT_RESEND | resend buffer count | 1 |
| GOAL_MI_MCTC_RPC_RECV_TOUT | receive timeout (in ms) | 10 |
7 Application Programming Interface of acyclic data
This chapter lists the API functions that are provided by GOAL RPC. Next to a short description and the presentation of the function arguments, the available macros are listed. They simplify the RPC handling by e.g. error validation, but require specific variable names.
specific variable names of RPC macros
| Name | Type | Description |
|---|---|---|
| res | GOAL_STATUS_T | result of GOAL functions |
| pHdlRpcChn | GOAL_RPC_HDL_CHN_T | RPC channel handle |
| pHdlRpc | GOAL_RPC_HDL_T | RPC handle |
Handling of these arguments is simplified in the code examples.
7.1 goal_rpcSetStackMax - Set the size of the RPC stack
This function configures the maximum size of the RPC stack in bytes.
This function can be called as long as the RPC channel is not set up. The stack size can only be increased. If this function is not called, the stack size is set to GOAL_MI_MCTC_XFER_SIZE.
Returns a GOAL_STATUS_T status.
goal_rpcSetStackMax arguments
| Arguments | Description |
|---|---|
| unsigned int size | byte size of the RPC stack |

```c
res = goal_rpcSetStackMax(1000);
```
7.2 goal_rpcHdlMaxSet - Set maximum RPC handle count
This function configures the maximum count of RPC handles.
This function can be called as long as the RPC channel is not set up. The handle count can only be increased. If this function is not called, the handle count is set to GOAL_MI_MCTC_HDL_CNT.
Returns a GOAL_STATUS_T status.
goal_rpcHdlMaxSet arguments
| Arguments | Description |
|---|---|
| unsigned int cntHdl | maximum handle count |

```c
res = goal_rpcHdlMaxSet(5);
```
7.3 goal_rpcStatus - Gets the status of RPC
This function checks whether a suitable partner for communication was found and the exchange of data is possible.
Returns a GOAL_STATUS_T status.
goal_rpcStatus arguments
| Arguments | Description |
|---|---|
| unsigned int usageId | usage ID |

```c
res = goal_rpcStatus(GOAL_ID_MI_CTC_DEFAULT);
if (GOAL_RES_ERR(res)) {
    goal_logWarn("RPC is not ready.");
    return res;
}
```
7.4 goal_rpcNew - Gets a RPC handle
Get an empty RPC handle from the RPC channel handle and initialize it.
Returns a GOAL_STATUS_T status.
Note: it is recommended to use the macro GOAL_RPC_NEW, because it simplifies the call by an included failure check.
goal_rpcNew arguments
| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_T **ppHdlRpc | [out] new RPC handle reference |
| GOAL_RPC_HDL_CHN_T *pHdlRpcChn | selected RPC channel handle |

The following example shows the usage of goal_rpcNew.

```c
/* get a new rpc handle */
res = goal_rpcNew(&pHdlRpc, pHdlRpcChn);
```

The following example shows the usage of the macro GOAL_RPC_NEW.

```c
/* get a new rpc handle */
GOAL_RPC_NEW();
```
7.5 goal_rpcClose - Close a RPC handle
This function closes the RPC handle.
The number of RPC handles is limited. The client is required to close the handle after it has received and evaluated the response of the server. On the server side, the handle is closed by GOAL automatically.
Returns a GOAL_STATUS_T status.
goal_rpcClose arguments
| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_T *pHdlRpc | RPC handle |

The following example shows the usage of goal_rpcClose.

```c
goal_rpcClose(pHdlRpc);
```

The following example shows the usage of the macro GOAL_RPC_CLOSE.

```c
GOAL_RPC_CLOSE();
```
7.6 goal_rpcCall - Calling the remote procedure
The RPC client calls the server by this function. It returns when a response is received or a timeout occurs. Meanwhile, other GOAL tasks or loops are not blocked.
Any RPC arguments must be pushed to the RPC handle before calling this function.
The following 12 byte header is added automatically to the RPC transmission handle for a request.

RPC request header

| 4 byte | 4 byte | 2 byte | 2 byte |
|---|---|---|---|
| function ID | RPC handle ID | MCTC message ID | flags |
The following 8 byte header is added automatically to the RPC transmission handle for a response.

RPC response header

| 4 byte | 2 byte | 2 byte |
|---|---|---|
| GOAL_STATUS_T of server | MCTC message ID | flags |
Returns a GOAL_STATUS_T status.
goal_rpcCall arguments

| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_T *pHdlRpc | RPC handle |
| uint32_t idGoal | GOAL ID |
| GOAL_RPC_FUNC_ID idFct | ID of the server function |
The following example shows the usage of goal_rpcCall.
```c
res = goal_rpcCall(pHdlRpc, GOAL_ID_APPL, GOAL_RPC_FUNC_TEST);
```
The following example shows the usage of the macro GOAL_RPC_USER_CALL. It is recommended to use this macro in user applications. The remote function should be registered by GOAL_RPC_USER_REGISTER_SERVICE on the server side.
```c
/* call the procedure at the server */
GOAL_RPC_USER_CALL(GOAL_RPC_FUNC_TEST);
```
7.7 goal_rpcRegisterService - Set a server function
Adds the callback to the list of server functions. Incoming requests are matched by module ID and function ID to select the correct entry. Registered functions cannot be overwritten.
Returns a GOAL_STATUS_T status.
goal_rpcRegisterService arguments

| Arguments | Description |
|---|---|
| uint32_t idGoal | GOAL ID |
| GOAL_RPC_FUNC_ID idFct | ID of the selected function |
| GOAL_RPC_FUNC_T fctServer | pointer to the new server function |
The following example shows the usage of goal_rpcRegisterService.
```c
res = goal_rpcRegisterService(GOAL_ID_APPL, GOAL_RPC_FUNC_TEST,
                              &goal_rpcUserFunctionServer);
```
The following example shows the usage of the macro GOAL_RPC_USER_REGISTER_SERVICE. It is recommended to use this macro in user applications. The client request should be sent by GOAL_RPC_USER_CALL.
```c
/* setup the service function */
GOAL_RPC_USER_REGISTER_SERVICE(GOAL_RPC_FUNC_TEST,
                               &goal_rpcUserFunctionServer);
```
7.8 goal_rpcSetupChannel - Setup a RPC channel
This function sets up an RPC channel for transmission. The timeout of the RPC is defined by GOAL_MCTC_TIMEOUT_RX.
Returns a GOAL_STATUS_T status.
goal_rpcSetupChannel arguments

| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_CHN_T **ppHdlRpcChn | [out] RPC channel handle |
| unsigned int usageId | usage ID |
The following example shows the usage of goal_rpcSetupChannel.
```c
res = goal_rpcSetupChannel(&pHdlRpcChn, GOAL_ID_APPL);
```
7.9 goal_rpcArgPush - Push size bytes to the stack
This function pushes size bytes to the transmission stack of the RPC handle. An error is returned if the value does not fit.
goal_rpcArgPush does not handle endianness. Please use the macro GOAL_RPC_PUSH if uncertainties about endianness exist.
Returns a GOAL_STATUS_T status.
Notes:
- Arguments must be popped in the reverse order of pushing them.
- The maximum argument size for the macro GOAL_RPC_PUSH is 32 bit.
- The macros for pushing arrays are GOAL_RPC_PUSH_PTR and GOAL_RPC_PUSH_PTR_FGM. They perform no endianness conversion.
- Pushing an array should be followed by pushing the length of the data. Thus, the foreign core can first pop the length information and then the corresponding amount of data.
goal_rpcArgPush arguments

| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_T *pHdlRpc | RPC handle |
| const uint8 *pData | data |
| unsigned int len | data length |
The following example shows the usage of goal_rpcArgPush by pushing the 16 bit variable var.
```c
uint16_t var = 1234;
res = goal_rpcArgPush(pHdlRpc, (const uint8_t *) &var, sizeof(var));
```
The following example shows the usage of the macro GOAL_RPC_PUSH by pushing the 16 bit variable var. The macro checks the status of res before converting its argument to a 32 bit big endian value and pushing it to the RPC stack.
```c
uint16_t var = 1234;
GOAL_RPC_PUSH(var);
```
The macro GOAL_RPC_PUSH_PTR pushes len bytes of data to the RPC stack.
```c
uint8_t buffer[len];
/* fill buffer with data */
GOAL_RPC_PUSH_PTR(buffer, len);
if (GOAL_RES_ERR(res)) {
    goal_logErr("Unable to push the complete buffer");
}
```
7.10 goal_rpcArgPop - Pop size bytes from the stack
This function pops size bytes from the receive stack of the RPC handle. An error is returned if there are not enough bytes on the stack.
goal_rpcArgPop does not handle endianness. Please use the macro GOAL_RPC_POP if uncertainties about endianness exist.
Returns a GOAL_STATUS_T status.
Notes:
- Arguments must be popped in the reverse order of pushing them.
- The maximum argument size for the macro GOAL_RPC_POP is 32 bit.
- A pushed array is followed by its length on the stack. Thus, the foreign core can first pop the length information and then the corresponding amount of data.
goal_rpcArgPop arguments

| Arguments | Description |
|---|---|
| GOAL_RPC_HDL_T *pHdlRpc | RPC handle |
| uint8 *pData | [out] data |
| unsigned int len | data length |
The following example shows the usage of goal_rpcArgPop by popping a 16 bit variable.
```c
uint16_t var = 0;
res = goal_rpcArgPop(pHdlRpc, (uint8_t *) &var, sizeof(var));
```
The following example shows the usage of the macro GOAL_RPC_POP by popping a 16 bit variable. The macro requires the data type as its second argument.
It checks the status of res before popping a 32 bit big endian value and converting the endianness and type of the data.
```c
uint16_t var = 0;
GOAL_RPC_POP(var, uint16_t);
```
The macro GOAL_RPC_POP_PTR pops len bytes of data from the RPC stack.
```c
uint16_t len = 12;
uint8_t buffer[len];
/* pop data from stack */
GOAL_RPC_POP_PTR(buffer, len);
if (GOAL_RES_ERR(res)) {
    goal_logErr("Unable to pop data");
}
```