python – Groupby Count in pandas user-defined time periods

I have a data frame like:

import datetime as dt
import pandas as pd

s = pd.Series(
    range(8),
    pd.to_datetime(
        [
            '20130101 10:34',
            '20130101 10:34:08',
            '20130101 10:34:08',
            '20130101 10:34:15',
            '20130101 10:34:28',
            '20130101 10:34:54',
            '20130101 10:34:55',
            '20130101 10:35:12'
        ]
    )
)
df = s.to_frame()
df = df.reset_index()
df = df.rename(columns={0: 'value', 'index': 'start'})
df['ID'] = [1, 2, 1, 2, 1, 2, 1, 2]

sec = dt.timedelta(seconds=30)

df['end'] = df['start'].map(lambda t: t + sec)  # equivalently: df['start'] + sec

df


                start  value  ID                 end
0 2013-01-01 10:34:00      0   1 2013-01-01 10:34:30
1 2013-01-01 10:34:08      1   2 2013-01-01 10:34:38
2 2013-01-01 10:34:08      2   1 2013-01-01 10:34:38
3 2013-01-01 10:34:15      3   2 2013-01-01 10:34:45
4 2013-01-01 10:34:28      4   1 2013-01-01 10:34:58
5 2013-01-01 10:34:54      5   2 2013-01-01 10:35:24
6 2013-01-01 10:34:55      6   1 2013-01-01 10:35:25
7 2013-01-01 10:35:12      7   2 2013-01-01 10:35:42

For each row, I need to sum the `value` entries of all rows with the same ID whose start timestamp falls between that row's start and end timestamps.
To be precise, my result must look like this:

p_ = []
# the explicit loop is a problem
for row in range(len(df)):
    p_.append(
        # using .loc row by row is a problem
        df.loc[
            (df['start'] >= df['start'][row]) &
            (df['start'] <= df['end'][row]) &
            (df['ID']    == df['ID'][row])
        ]
        ['value']
        .sum()
    )
df['sum_of_values_for_ID_in_time_period'] = p_
df
                start  value  ID                 end  sum_of_values_for_ID_in_time_period
0 2013-01-01 10:34:00      0   1 2013-01-01 10:34:30                                    6
1 2013-01-01 10:34:08      1   2 2013-01-01 10:34:38                                    4
2 2013-01-01 10:34:08      2   1 2013-01-01 10:34:38                                    6
3 2013-01-01 10:34:15      3   2 2013-01-01 10:34:45                                    3
4 2013-01-01 10:34:28      4   1 2013-01-01 10:34:58                                   10
5 2013-01-01 10:34:54      5   2 2013-01-01 10:35:24                                   12
6 2013-01-01 10:34:55      6   1 2013-01-01 10:35:25                                    6
7 2013-01-01 10:35:12      7   2 2013-01-01 10:35:42                                    7

Instead of the for loop and the row-by-row .loc, I would like help transforming this problem into some kind of groupby/map solution, because my real data set barely fits in memory and I need something faster.
I have tried to use:

df.groupby(
    [
        df.start.map(lambda t: t.minute),
        'ID'
    ]
)[['value']].sum()
but this groups by calendar minute and produces a result that does not depend on each row's `end` column at all.

          value
start ID
34    1      12
      2       9
35    2       7