Create Columns for all time dimensions
I have data like the Data_df example dataframe below. I'm wondering if there's a way to create new columns for all the time dimensions in one of the timestamp fields, for example the 'start_timestamp' field. I'd like to create new columns for the year, month, weekday, hour, and minute based on the 'start_timestamp' column. I know I could code each time dimension manually, but I'm wondering if there's a way to inspect the timestamp and create them automatically.
Data_df:
Unnamed: 0 call_history_id calllog_id
0 16358 1210746736 ca58d850-6fe6-4673-a049-ea4a2d8d7ecf
1 16361 1210976828 c005329b-955d-4d88-98a5-1c47e6a1cb80
2 16402 1217791595 050e9b83-54c2-4c87-abdd-32225c0d3189
3 16471 1228495414 45705ed1-a8e2-4a15-8941-5b0a40b7d409
4 27906 1245173592 04e56818-04a0-4704-ac86-31c31dac2370
call_id connection_id pbx_name pbx_id extension_number
0 1.509170e+12 1.509170e+12 sales8x8 sales8x8 595
1 1.509170e+12 1.509170e+12 sales8x8 sales8x8 595
2 1.509170e+12 1.509170e+12 sales8x8 sales8x8 595
3 1.509170e+12 1.509170e+12 sales8x8 sales8x8 595
4 1.509170e+12 1.509170e+12 sales8x8 sales8x8 595
extension_id customer_id address name
0 595 2.525100e+29 14086694428 Sun Basket
1 595 2.525100e+29 13214371589 PEREZ,BRYAN
2 595 2.525100e+29 14088566290 14088566290
3 595 2.525100e+29 8059316676 Dialing
4 595 2.525100e+29 12028071151 Implementation Team
start_timestamp direction call_internal call_missed duration
0 1/8/18 19:49 I 0.0 0.0 4414.0
1 1/8/18 20:09 I 0.0 0.0 8300.0
2 1/9/18 20:31 I 0.0 0.0 14766.0
3 1/11/18 17:16 I 0.0 0.0 1686.0
4 1/15/18 22:55 I 0.0 0.0 3491.0
device_model group_call group_name group_number device_id
0 mediaserver 0.0 N N MasterSlaveService
1 mediaserver 0.0 N N MasterSlaveService
2 mediaserver 0.0 N N MasterSlaveService
3 mediaserver 0.0 N N MasterSlaveService
4 mediaserver 0.0 N N MasterSlaveService
history_event_state created_time updated_time group_type
0 A 1/8/18 19:49 1/8/18 19:49 N
1 A 1/8/18 20:09 1/8/18 20:09 NaN
2 A 1/9/18 20:31 1/9/18 20:31 N
3 A 1/11/18 17:16 1/11/18 17:16 N
4 A 1/15/18 22:55 1/15/18 22:55 N
Update:
def ts_periods(f_nm, d_list, d_df):
    """Add one column per requested time dimension of the timestamp column f_nm."""
    t_df = d_df.copy()
    dt_idx = pd.DatetimeIndex(t_df[f_nm])
    for i in d_list:
        if i == 'year':
            t_df[f_nm + '_year'] = dt_idx.year
        elif i == 'month':
            t_df[f_nm + '_month'] = dt_idx.month
        elif i == 'weekday':
            # .weekday_name was removed in newer pandas; .day_name() is the replacement
            t_df[f_nm + '_weekday'] = dt_idx.day_name()
        elif i == 'week':  # was `i=='week' in d_list`, which tests membership of True/False
            # dt_idx.week is deprecated in newer pandas; isocalendar().week is the current API
            t_df[f_nm + '_week'] = dt_idx.isocalendar().week.to_numpy()
        elif i == 'hour':
            t_df[f_nm + '_hour'] = dt_idx.hour
        elif i == 'minute':
            t_df[f_nm + '_minute'] = dt_idx.minute
    return t_df
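A more generic variant of the same idea (a sketch; the helper name and default dimension list are placeholders, not anything pandas itself provides): since every dimension here is just an attribute on the .dt accessor, the if/elif chain can be replaced by a getattr loop, so supporting another dimension only means adding its name to the list.

import pandas as pd

def add_time_parts(df, col, parts=('year', 'month', 'weekday', 'hour', 'minute')):
    # Every entry in `parts` must be the name of a Series.dt attribute.
    out = df.copy()
    ts = pd.to_datetime(out[col])          # make sure the column is datetime64
    for part in parts:
        out[col + '_' + part] = getattr(ts.dt, part)
    return out

# e.g. Data_df = add_time_parts(Data_df, 'start_timestamp')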
python-3.x pandas timestamp time-series
asked Nov 25 '18 at 22:37 by modLmakur, edited Nov 26 '18 at 5:17
If your start_timestamp field is stored as a datetime, you can use the date accessors. See the documentation: pandas.pydata.org/pandas-docs/stable/….
– smj
Nov 25 '18 at 23:14
1 Answer
A short example using your data and the .dt accessors. We first convert the data to a pandas timestamp, then access the dimensions we want:
import pandas as pd

data = pd.DataFrame(
    {
        'time_stamp': ['1/8/18 19:49', '1/9/18 20:31', '1/11/18 17:16']
    }
)
# dayfirst=True parses '1/8/18' as 1 August 2018 (day/month/year order)
data['time_stamp'] = pd.to_datetime(data['time_stamp'], dayfirst=True)
data['day_of_week'] = data['time_stamp'].dt.weekday  # Monday=0 ... Sunday=6
data['hour_of_day'] = data['time_stamp'].dt.hour
print(data)
Gives:
time_stamp day_of_week hour_of_day
0 2018-08-01 19:49:00 2 19
1 2018-09-01 20:31:00 5 20
2 2018-11-01 17:16:00 3 17
Documentation: https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors
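One caveat, assuming the question's timestamps are month-first (the sample dates 1/8/18 through 1/15/18 look like January 2018): with dayfirst=True the strings above parse as 1 August, 1 September, and 1 November, which is what the output shows. A sketch of the month-first alternative is to pass an explicit format string, which removes the ambiguity:

# assumes US-style month/day/year strings such as '1/8/18 19:49'
data['time_stamp'] = pd.to_datetime(data['time_stamp'], format='%m/%d/%y %H:%M')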
answered Nov 25 '18 at 23:30 by smj
Thanks for getting back to me. I've added an update with the function I came up with. I was hoping there would be a way to do it without having to specify the dimensions (i.e. year, month, day, hour), something that would determine the dimensions present in the timestamp and create those fields automatically.
– modLmakur
Nov 26 '18 at 5:19
I'm not aware of any way to get around explicitly specifying the dimension you are after at some point (e.g. in a function or elsewhere in your code) in pandas. Other libraries, maybe? Or it could be an enhancement to pandas?
– smj
Nov 26 '18 at 19:22
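Getting all the way to "determine the dimensions automatically" would need some heuristic; the one below is purely an assumption of this sketch, not a pandas feature. It computes every candidate component with the same getattr pattern as above and keeps only the ones that actually vary in the data, so constant parts such as an always-zero seconds field never become columns.

import pandas as pd

def auto_time_parts(df, col,
                    candidates=('year', 'month', 'weekday', 'hour', 'minute', 'second')):
    out = df.copy()
    ts = pd.to_datetime(out[col])
    for part in candidates:
        values = getattr(ts.dt, part)
        if values.nunique() > 1:           # heuristic: skip components that never change
            out[col + '_' + part] = values
    return out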